 
= Log Collection =

== Introduction ==
  
Pandora FMS is a monitoring system that mainly collects events and numerical performance information. It is sometimes used to monitor the output of certain commands in text form; the same mechanism can be used to "find" certain expressions or pieces of text within a log, returning only that information instead of the whole log.

Pandora can also be used to count the number of occurrences of an expression in a log, or simply the total number of lines of a file. Either way, that is numerical monitoring, not log collection.

The big issue with massive log collection is the amount of space these logs take up. We are talking about environments ranging from 100 MB per day to hundreds of MB per day. This means that this information cannot be stored in the database.

So far, Pandora FMS did not have a solution to this problem, but as of version 5.0, <b>Pandora FMS Enterprise</b> offers a solution to manage hundreds of MB of data per day. This solution allows the same monitoring agents to be reused for log data collection, using a syntax very similar to the existing one for log monitoring.
  
From the 7.0NG 712 version, Pandora incorporates <b>LogStash + ElasticSearch</b> to store log information, which represents a substantial improvement in performance.

== How it works ==

The process is simple:

<center><br><br>
[[Image:Esquemas-logs.png|650px]]
</center><br><br>
  
* The logs analysed by the agents ('''eventlog''' or text files) are forwarded to the Pandora Server, literally (RAW), within the XML reported by the agent.
* The Pandora Server (DataServer) receives the agent XML, which contains both monitoring and log information.
* When the DataServer processes the XML data, it identifies the log information, keeps in the primary database the references to the reporting agent and the log source, and automatically forwards the information to LogStash to be stored.
* LogStash stores the information in Elasticsearch.
* Finally, the log information can be checked through the log viewer in the Pandora FMS console. The console performs its queries against the configured Elasticsearch server.
 
  
== Configuration ==

=== Server Configuration ===

The new log storage system, based on ElasticSearch + LogStash, requires several components to be configured.
 
  
==== Server Requirements ====

Each component (Pandora FMS Server, Elasticsearch, LogStash) can be installed on a separate server.

If you choose to place Elasticsearch and LogStash on the same server, we recommend:

* At least 4 GB of RAM
* At least 2 CPU cores
* At least 20 GB of disk space for the system
* At least 50 GB of disk space for the /var mount point, mounted as LVM
* Connectivity on port 10516/TCP from the Pandora server to LogStash and on 9200/TCP from the Pandora console to Elasticsearch (see the quick check sketched below)
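
A minimal reachability check, assuming the example host names used later in this document (eli.artica.lan for LogStash and elastic for Elasticsearch), could be:

 # From the Pandora FMS server: is LogStash listening on 10516/TCP?
 nc -zv eli.artica.lan 10516
 # From the host running the Pandora console: does Elasticsearch answer on 9200/TCP?
 curl -s http://elastic:9200/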
  
If you have a machine hosting the historical database, that same machine can be used to install Elasticsearch and LogStash. In that case, the minimum requirements of the machine should be adjusted to the amount of data to be processed, the minimum being:

* At least 4 GB of RAM
* At least 4 CPU cores
* At least 20 GB of disk space for the system
* At least 50 GB of disk space for the /var mount point, mounted as LVM

==== Installing and configuring ElasticSearch ====

Before installing these components, you must install Java on the machine:

 yum install java

Once installed, install Elasticsearch from the RPM available on the Elasticsearch project website: https://www.elastic.co/downloads/elasticsearch
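
As a sketch only, assuming the RPM has already been downloaded (the version in the file name is hypothetical; adjust it to the package you obtained):

 # Install the downloaded package; the file name is only an example
 rpm -i elasticsearch-5.6.1.rpm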

Configure the service:

Configure the network options and, ''optionally'', the data locations (and the logs of Elasticsearch itself) in the configuration file located at ''/etc/elasticsearch/elasticsearch.yml''

 # ---------------------------------- Network -----------------------------------
 # Set the bind address to a specific IP (IPv4 or IPv6):
 network.host: 0.0.0.0
 # Set a custom port for HTTP:
 http.port: 9200
 # ----------------------------------- Paths ------------------------------------
 # Path to directory where to store the data (separate multiple locations by comma):
 #path.data: /var/lib/elastic
 # Path to log files:
 #path.logs: /var/log/elastic

You will also need to adjust the resources allocated to Elasticsearch, using the parameters available in the configuration file located at ''/etc/elasticsearch/jvm.options''

 # Xms represents the initial size of total heap space
 # Xmx represents the maximum size of total heap space
 -Xms512m
 -Xmx512m

Start the service:

 systemctl start elasticsearch
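
To confirm that Elasticsearch is up before continuing, you can query its REST port (this assumes the service is listening on localhost:9200, as configured above):

 # Should return a small JSON document with the node name and version
 curl -s http://localhost:9200/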

==== Installing and configuring LogStash ====

Install LogStash from the RPM available on the Elasticsearch project website: https://www.elastic.co/downloads/logstash

Configure the service:

Within the LogStash configuration there are three configuration blocks:

* Input: indicates how the information reaches LogStash (format, port) and an identifier used to store the information internally in Elasticsearch.
* Filter: post-processing could be added here, but in our case it is not necessary, so we leave it empty.
* Output: holds the IP and port where Elasticsearch is listening; this is where the information processed by LogStash will be saved.

Configuration file:

 /etc/logstash/conf.d/logstash.conf

Example of configuration file:

 # This input block will listen on port 10516 for logs to come in.
 # host should be an IP on the LogStash server.
 # codec => "json" indicates that we expect the lines we receive to be in JSON format.
 # type => "pandora_remote_log_entry" is an identifier that helps to distinguish messaging streams in the pipeline.
 input {
   tcp {
     host  => "0.0.0.0"
     port  => 10516
     codec => "json"
     type  => "pandora_remote_log_entry"
   }
 }
 # This is an empty filter block. You can later add other filters here to further process
 # your log lines.
 filter { }
 output {
   elasticsearch { hosts => ["localhost:9200"] }
 }
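
Optionally, before starting the service you can ask LogStash to validate the file; this is just a sanity check and assumes the default RPM installation path:

 # Parse the configuration and exit without starting the pipeline
 /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf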

Start the service:

 systemctl start logstash

==== Configuration parameters in Pandora FMS Server ====

You will need to add the following configuration to the Pandora FMS Server configuration file so that the DataServer processes the log information.

'''Important:''' Any log that reaches Pandora without this configuration active will be '''discarded'''.

 logstash_host eli.artica.lan
 logstash_port 10516
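
After adding these lines, restart the server so that the DataServer picks up the new parameters (assuming the standard service name of the Pandora FMS server package):

 systemctl restart pandora_server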

==== Recommendations ====

===== Log rotation for Elasticsearch and LogStash =====

'''Important:''' We recommend creating a new entry for the daemons' log rotation in /etc/logrotate.d, to prevent Elasticsearch or LogStash logs from growing without limit:

 cat > /etc/logrotate.d/elastic <<EOF
 /var/log/elastic/elasticsearch.log
 /var/log/logstash/logstash-plain.log {
        weekly
        missingok
        size 300000
        rotate 3
        maxage 90
        compress
        notifempty
        copytruncate
 }
 EOF
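
You can check that the new entry is valid with a logrotate dry run, which only prints what would be rotated without touching any file:

 logrotate -d /etc/logrotate.d/elastic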

===== Purging of Indexes =====

You can check the list of indexes and the size they occupy at any time by launching a cURL request against the ElasticSearch server:

 curl -q http://elastic:9200/_cat/indices?

To remove any of these indexes, execute a DELETE request against it:

 curl -q -XDELETE http://elastic:9200/logstash-2017.09.06

This will free up the space used by the removed index.
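
As a sketch of how this purge could be automated (the 90-day retention, the host name elastic and the daily logstash-YYYY.MM.DD index naming are assumptions based on the examples above), a small script run from cron could remove the oldest indexes:

 #!/bin/bash
 # Hypothetical purge script: delete logstash-YYYY.MM.DD indexes older than RETENTION_DAYS
 RETENTION_DAYS=90
 ES="http://elastic:9200"
 CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%Y.%m.%d)
 for idx in $(curl -s "${ES}/_cat/indices/logstash-*?h=index"); do
     day=${idx#logstash-}
     # Lexicographic comparison works because the dates are zero-padded
     if [[ "$day" < "$CUTOFF" ]]; then
         curl -s -XDELETE "${ES}/${idx}"
     fi
 done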

=== Console Settings ===

To enable the log display system, you must enable the following configuration:

<br><center>
[[image:activate_logcollection.png|850px]]
<br></center>

Then the log viewer behaviour can be set in the 'Log Collector' tab:

<br><center>
[[image:Log_config_consola.PNG|850px]]
<br></center>

On this screen you can configure:

* IP or FQDN of the server hosting the Elasticsearch service
* Port on which the Elasticsearch service is served
* Number of logs shown. To speed up the response of the console, dynamic loading of records has been added: the user must scroll to the bottom of the page to force the next group of available records to load. The size of these groups can be set in this field as the number of records per group.
* Days to purge: to keep the size of the system under control, you can define a maximum number of days for which the log information will be kept; after that period it will be automatically deleted by the Pandora FMS cleaning process.

== Migration to the LogStash + Elasticsearch system ==

After setting up the new log storage system, you can migrate all the data previously stored in Pandora, distributed in directories, to the new system.

To migrate to the new system, run the following script, which can be found in /usr/share/pandora_server/util/

 # Migrate Log Data < 7.0NG 712 to >= 7.0NG 712
 /usr/bin/pandora_migrate_logs /etc/pandora/pandora_server.conf

== Display and Search ==

In a log collection tool, we are mainly interested in two things: searching for information, filtering by date and/or data source, and seeing that information drawn as occurrences per time unit. In this example, we are searching, over the last week, for any data source that contains the expression "named".

<br><center><i>Data View</i>
[[image:log_view_1.png|850px]]
<br></center>

<br><center><i>View occurrences over time</i>
[[image:log_view_2.png|850px]]
<br></center>

There is a series of filters that can be used to display the information: the most obvious one is the date range (beginning and end), and others such as the module or source of the information (defined when configuring the log collector in the agent) and the source agent of the log:

<br><center>
[[image:Log view filter.png|850px]]
<br></center>

The most important and useful field is the search string (''search'' in the screenshot). This can be a simple text string, as in the previous case, or a regular expression, such as an IP address:

 192.168.[0-9]+.[0-9]+

As shown in the screenshot below, this searches the defined date/time interval (the last hour) on any data source for anything that "looks like" an IP address within the range 192.168.0.0/16:

<br><center>
[[image:Event_log_3.png|850px]]
<br></center>
  
== Configuring agents ==

Log collection is done by the agents, both the Windows agent and the Unix agents (Linux, MacOS X, Solaris, HP-UX, AIX, BSD, etc.). In the case of Windows agents, information can also be obtained from the Windows Event Viewer, using the same filters as in the event viewer monitoring module.

Here are two examples of how to capture log information, on Windows and on Unix:

=== Windows ===

 module_begin
 module_name Eventlog_System
 module_type log
 module_logevent
 module_source System
 module_end
 module_begin
 module_name PandoraAgent_log
 module_type log
 module_regexp C:\archivos de programa\pandora_agent\pandora_agent.log
 module_description This module will return all lines from the specified logfile
 module_pattern .*
 module_end
  
In both cases, the only difference between a monitoring module and the definition of a log source is:

 module_type log

This new syntax is only understood by the version 5.0 agent, so you must update the agents if you want to use this new Enterprise feature.

=== Unix Systems ===

On Unix, a new plugin that comes with the version 5.0 agent is used. Its syntax is simple:

 module_plugin grep_log_module /var/log/messages Syslog \.\*

Similar to the log parsing plugin (grep_log), the grep_log_module plugin sends the processed log information to the log collector under the name "Syslog" as the source of the log. Use the regular expression \.\* (in this case, "everything") as the pattern when choosing which lines are sent and which are not.
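
As an illustrative variation (the module name SyslogErrors and the pattern are assumptions, not taken from the official examples), the same plugin could be used to forward only the lines that contain the word "error":

 module_plugin grep_log_module /var/log/messages SyslogErrors error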

[[Pandora:Documentation_en|Go back to Pandora FMS documentation index]]