The changes/tweaks I made to pandora_agent 1.2
Posted by daggett on December 6, 2006 at 19:21
So this is the second one, and it's for pandora_agent:
pandora_user.conf :
[code:1]# ================================
# Check for Ping response from Google.fr
# on local interface eth0
# ================================
PING_TIME=$(ping www.google.fr -c 1 2> /dev/null | grep time= | awk 'BEGIN {FS="time=";} {print $2}' | cut -d' ' -f1)
# if it cannot connect nothing is returned
echo "<module>"
echo "<name>PING_CHECK</name>"
echo "<type>generic_data</type>"
echo "<data>$PING_TIME</data>"
echo "</module>"
# MODULE END ========================

# ================================
# Check for traffic rate
# on local interface eth0
# ================================
TOTALBYTES=$(ifconfig eth0 | grep "RX bytes" | cut -d':' -f2 | cut -d' ' -f1)
echo "<module>"
echo "<name>TOTAL_TRAFFIC_ETH0</name>"
echo "<type>generic_data_inc</type>"
echo "<data>$TOTALBYTES</data>"
echo "</module>"
# MODULE END ========================

pandora_agent.conf :
[code:1]# General Parameters
# ==================
server_ip Donald
server_port 22
server_path /opt/pandora/pandora_server/data_in
pandora_path /opt/pandora/pandora_agents/linux
temporal /opt/pandora/pandora_agents/linux/data_out
interval 300
debug 0
checksum 0
agent_name Monitoring_Server

# Module Definition
# =================
# vmstat syntax depends on linux distro and vmstat command version, please check before using it
module_begin
module_name cpu_user
module_type generic_data_inc
module_interval 1
module_exec cat /proc/stat | head -1 | awk '{print $2}'
module_max 30000
module_min 0
module_description User CPU usage (1/100 s)
module_end

module_begin
module_name disk_root_used
module_type generic_data
module_interval 2
module_exec df -Pkh / | tail -1 | awk '{ print $5+0 }'
module_max 100
module_min 0
module_description Used disk percentage of root partition
module_end

Note the -P appended to the df command, to avoid collecting nothing when the df output would normally span 2 lines (see my post or bug report concerning this).
The new cpu_user and cpu_sys modules report the number of 1/100ths of a second the machine spent on user/system tasks since the previous sample, using the generic_data_inc data type. The same approach is used for monitoring traffic on the network interfaces (just above, in pandora_user.conf).
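For reference, the matching cpu_sys module looks roughly like this (I have not pasted mine above, so treat it as a sketch; the awk field assumes the usual /proc/stat first line "cpu user nice system idle ...", which can vary between kernels):
[code:1]module_begin
module_name cpu_sys
module_type generic_data_inc
module_interval 1
module_exec cat /proc/stat | head -1 | awk '{print $4}'
module_max 30000
module_min 0
module_description System CPU usage (1/100 s)
module_end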
pandora_agent.sh :
added this to collect SERVER_PORT from the config file:
[code:1]
if [ ! -z "`echo $a | grep -e '^server_port'`" ]
then
SERVER_PORT=`echo $a | awk '{ print $2 }'`
echo "$TIMESTAMP - [SETUP] - Server Port is $SERVER_PORT" >> $PANDORA_HOME/pandora.log
fi

then at the end of the file:
[code:1] # Send packets to server and delete them
scp -P $SERVER_PORT $PANDORA_FILES pandora@$SERVER_IP:$SERVER_PATH > /dev/null 2> /dev/null
::
Ok, I just made a new modification.
I added a new parameter in pandora_agent.conf:
keep_files 1

and in pandora_agent.sh:
[code:1] if [ ! -z "`echo $a | grep -e '^keep_files'`" ]
then
KEEP_FILES=`echo $a | awk '{ print $2 }'`
echo "$TIMESTAMP - [SETUP] - Keep XML files until successful transfer is $KEEP_FILES" >> $PANDORA_HOME/pandora.log
fi

And at the end of the file:
[code:1] # Send packets to server and delete if transfer completed successfully
rsync -az --remove-sent-files -e "ssh -p$SERVER_PORT" $TEMP/* pandora@$SERVER_IP:$SERVER_PATH > /dev/null 2> /dev/null

if [ "$DEBUG_MODE" == "1" ]
then
echo "$TIMESTAMP - Copying $PANDORA_FILES to $SERVER_IP:$SERVER_PATH" >> $PANDORA_HOME/pandora.log
else
if [ "$KEEP_FILES" == "0" ]
then
# Delete them
rm -f $TEMP/* > /dev/null 2> /dev/null
fi
fi

rsync is much more flexible than scp, so I switched to it.
What is done here:
when the agent can't connect to the server, you now have the choice to keep the XML data files in data_out until they can _all_ be successfully transferred to the server, and only then erased, by setting keep_files to 1.
So you no longer lose data when the connection between agent and server is down.
If keep_files is set to 0, the XML files will be erased even if they haven't been transferred (just like in the unmodified version of this script).
Hope it can be useful, bye
Denis
::
Ok, I just made a new modification.
I added a new parameter in pandora_agent.conf:
keep_files 1
# Send packets to server and delete if transfer completed successfully
rsync -az --remove-sent-files -e "ssh -p$SERVER_PORT" $TEMP/* pandora@$SERVER_IP:$SERVER_PATH > /dev/null 2> /dev/null
rsync is much more flexible than scp, so I switched to it. Hope it can be useful, bye. Denis
It's a good idea to add a parameter to the agent so that it does not delete the XML files (we implemented the same feature in 1.1 using DEBUG mode; now debug mode stops before copying data). We talked some months ago about adding a new feature that tries to copy and, if it cannot, stores the data and tries to send it again in the next interval.
Storing data could be interesting for debugging or testing purposes, but at this time I can't imagine an environment that would need this feature. Have you implemented it for a specific need, or is it for testing purposes only?
The RSYNC usage you're doing here is interesting. I don't know exactly what RSYNC is doing there. Can you tell us something more about this?
::
Hi,
what the code I modified does:
[code:1] # Send packets to server and delete if transfer completed successfully
rsync -az --remove-sent-files -e "ssh -p$SERVER_PORT" $TEMP/* pandora@$SERVER_IP:$SERVER_PATH > /dev/null 2> /dev/null

if [ "$DEBUG_MODE" == "1" ]
then
echo "$TIMESTAMP - Copying $PANDORA_FILES to $SERVER_IP:$SERVER_PATH" >> $PANDORA_HOME/pandora.log
else
if [ "$KEEP_FILES" == "0" ]
then
# Delete them
rm -f $TEMP/* > /dev/null 2> /dev/null
fi
fi

First, rsync tries to connect to the server via ssh (on port $SERVER_PORT) in archive mode (-a) and compresses data before sending (-z).
If successful, it then deletes the XML data files on the client (--remove-sent-files).
If the transfer fails, the XML files are kept.
Then, if $DEBUG_MODE==1 (original behaviour), unsent XML files are effectively kept until the next try.
Else (we have $DEBUG_MODE==0), if $KEEP_FILES==0 then ALL unsent XML files in data_out are deleted (all the XML files sent by rsync were already deleted by rsync, so this will often delete nothing).
rsync is useful in our case because it can verify whether the files were effectively transferred or not, and delete them (agent side) if the transfer was OK.
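As a possible refinement (only a sketch, it is not in the script above): the failure case could also be logged explicitly by checking rsync's exit status:
[code:1] # Send packets to server and delete if transfer completed successfully
rsync -az --remove-sent-files -e "ssh -p$SERVER_PORT" $TEMP/* pandora@$SERVER_IP:$SERVER_PATH > /dev/null 2> /dev/null
# rsync exits non-zero when the connection or the transfer failed
if [ $? -ne 0 ]
then
echo "$TIMESTAMP - Transfer to $SERVER_IP failed, keeping XML files in $TEMP" >> $PANDORA_HOME/pandora.log
fi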
bye
::
Hi,
rsync is useful in our case because it can verify whether the files were effectively transferred or not, and delete them (agent side) if the transfer was OK.
bye
Did you have any problems with SCP before or while using the Pandora agents? I was thinking that rsync could be a replacement for the entire SSH engine for copying files, but authentication could be a problem… I know that there is an RSYNC client for almost every OS, so it could be interesting.
::
Well yes, the major problem with scp is that we don't know whether the transfer was successful or not, and it can only copy files, not delete them once they have been transferred.
And it seems that scp stays alive even if it can't contact the server; there is no time-out.
I had some scp processes still alive a whole night after the agent had been turned off…
Plus, rsync is capable of compressing data before sending, so if you have chosen not to erase the untransmitted XML data files when the server is unreachable, you can have a large amount of data to transfer.
So, rsync I think is clearly an upgrade.
And concerning authentication, I can't see any difficulties using rsync: it uses ssh as well, and everything is tunneled through encrypted ssh. The only things to change are the command line in the agent and the script specified as a command in authorized_keys on the server, so that it accepts rsync instead of scp (see my post about securing ssh transfers).
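Roughly, the server-side part is a forced command in ~pandora/.ssh/authorized_keys that only allows the rsync server invocation; a minimal sketch of such a wrapper (the path and the exact check are illustrative, see my other post for the real setup):
[code:1]#!/bin/bash
# referenced from authorized_keys as: command="/home/pandora/rsync_only.sh" ssh-rsa AAAA...
# only allow the rsync server side invoked by the agent, reject anything else
case "$SSH_ORIGINAL_COMMAND" in
    "rsync --server"*)
        exec $SSH_ORIGINAL_COMMAND
        ;;
    *)
        echo "Rejected command: $SSH_ORIGINAL_COMMAND" >&2
        exit 1
        ;;
esac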
bye
::
Well yes, the major problem with scp is that we don't know whether the transfer was successful or not, and it can only copy files, not delete them once they have been transferred.
And it seems that scp stays alive even if it can't contact the server; there is no time-out.
I had some scp processes still alive a whole night after the agent had been turned off…
Plus, rsync is capable of compressing data before sending, so if you have chosen not to erase the untransmitted XML data files when the server is unreachable, you can have a large amount of data to transfer.
So, rsync I think is clearly an upgrade.
And concerning authentication, I can't see any difficulties using rsync: it uses ssh as well, and everything is tunneled through encrypted ssh. The only things to change are the command line in the agent and the script specified as a command in authorized_keys on the server, so that it accepts rsync instead of scp (see my post about securing ssh transfers).
bye
One positive point for rsync is knowing whether the transfer was successful. I have had no problems in GNU/Linux, Solaris or AIX with "scp" processes hanging; they have a timeout based on the SSH connection :-?. In what environment have you experienced problems?
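(If the hangs happen while connecting, the client-side timeout can also be forced explicitly; just a sketch, the option is standard OpenSSH and the value is only an example:)
[code:1] # Send packets to server and delete them, giving up after 10 seconds if the server is unreachable
scp -o ConnectTimeout=10 -P $SERVER_PORT $PANDORA_FILES pandora@$SERVER_IP:$SERVER_PATH > /dev/null 2> /dev/null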
I was talking about using RSYNC outside the SSH tunnel, because most problems with initial setups come from SSH automatic authentication, which can be nasty for people not used to doing it. I don't know rsync in detail, so it's possible that my questions don't make sense at all :-))
We're now working on identifying another possible solution for managing connections, authentication and file transfer. We're thinking about implementing our own transfer system based on a single SSL connection. We share a lot of code with our brother project, Babel Enterprise, which has much bigger XML data files (some of them about 2-5 MB!), so compression and file integrity are very important as well.
::
Hi,
Do you mean some kind of XML-RPC over HTTPS? If you plan to develop a new specific protocol, well, I guess it is just reinventing the wheel! Plus it won't be a standard then. Setting up a new protocol should take into account monitoring over the internet, which is a very insecure environment where one ought to either open nothing other than ssh or be an expert in security.
So it seems to me that you are now concentrating your efforts on fine-tuning Pandora, as if it were all finished already… I understand your point: getting more traction will help build up the community and get more momentum. But as you said in a post, there are many other things to change/enhance _before_ trying to make it more user-friendly (computation tasks before storing data, implementing alerts on text data, …).
Well, my view is:
– SSH is a standard,
– its security is proved and strong,
– it's not so complicated to get it up and running if you dare to play with it a bit,
– what kind of people install Pandora? Who really _needs_ Pandora up and running on a LAN or WAN? (Mr. Homer Simpson on his home 3-computer LAN?)
– If a system administrator/technician can't deal with SSH (and authentication protocols), well… that's the signal to take some time to learn it or follow the tutorial (very good tutorial by the way 🙂).
Security is not an easy piece of cake (or tortilla, burritos or whatever you want :-p). Getting a new transport protocol is a good thing. Tunneling through SSL secures the data while it is on the wire, but it won't spare the Pandora admin from having to be a decently skilled person for server configuration anyway (firewalling, filtering, etc.).
I hope we can quickly get Pandora to become the ultimate monitoring tool!
just my 0.02¤.
bye for now!
::
Hi,
Do you mean some kind of XML-RPC over HTTPS? If you plan to develop a new specific protocol, well, I guess it is just reinventing the wheel! Plus it won't be a standard then. Setting up a new protocol should take into account monitoring over the internet, which is a very insecure environment where one ought to either open nothing other than ssh or be an expert in security.
Ummm, not exactly. We're planning to develop a protocol ONLY to transfer XML files, with an authentication method, that's to say, a user/password which would be set in agent.conf.
So it seems to me that you are now concentrating your efforts on fine-tuning Pandora, as if it were all finished already… I understand your point: getting more traction will help build up the community and get more momentum. But as you said in a post, there are many other things to change/enhance _before_ trying to make it more user-friendly (computation tasks before storing data, implementing alerts on text data, …).
Like Linux, we're multitasking ;-).
We are doing a lot of things right now: improving Pandora, bugfixes, improving the docs, packages… and also discussing at the office how the new protocol should be…
Well, my view is:
– SSH is a standard,
– its security is proved and strong,
I agree without a doubt.
I used to think like you until yesterday, when Nil and I went out of the office for a meal and he convinced me why we should use a new protocol: a simple one with SSL.
– it's not so complicated to get it up and running if you dare to play with it a bit,
– what kind of people install Pandora? Who really _needs_ Pandora up and running on a LAN or WAN? (Mr. Homer Simpson on his home 3-computer LAN?)
– If a system administrator/technician can't deal with SSH (and authentication protocols), well… that's the signal to take some time to learn it or follow the tutorial (very good tutorial by the way 🙂).
Of course it isn't, if you're experienced. But even if you're a guru you could have a slip-up.
I'm not a guru at all, but I have played with ssh and ssh keys for years, and some days ago I was setting up a Pandora agent on Windows (if you have played with the Windows agent you will have realized how painful the process is). Well, I generated the keys, exported them into OpenSSH format and copied them into the Linux authorized keys (on the server, I mean), and the Pandora agent couldn't connect to the server, although everything seemed to be correct. Finally Nil, who was sitting next to me, noticed that I was using /home/babel instead of /home/pandora.
Things like that make people fed up with Pandora; everybody wants a simple installation method, no matter whether it's Homer Simpson or Lisa Simpson.
Honestly, who reads the doc before installing something? No one.
I mean, do you prefer to generate keys, copy them to the right box, check perms… or just set a password (keyboard-based) for all the agents and that's all? 🙂
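(Just to picture the key-based path being compared here, these are roughly the steps per agent; the commands are illustrative, only the pandora user comes from this thread:)
[code:1]# on the agent, as the user that runs pandora_agent.sh
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# copy the public key to the server somehow
scp ~/.ssh/id_rsa.pub pandora@server:/tmp/agent_key.pub

# on the server, as the pandora user
cat /tmp/agent_key.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys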
Security is not an easy piece of cake (or tortilla, burritos or whatever you want :-p). Getting a new transport protocol is a good thing. Tunneling through SSL secures the data while it is on the wire, but it won't spare the Pandora admin from having to be a decently skilled person for server configuration anyway (firewalling, filtering, etc.).
Yeah, and some years ago Linux users used to say "Linux is not for the desktop, it's just for servers, and even then only for real gurus", and now, at least in Spain, Linux is being used in high schools by normal users (msn, emule, chat ones), got it? 🙂
Of course security is not as easy as setting up an emule tool, yeah, I agree, but we could make it easier. And the new protocol we're thinking about runs over SSL, don't worry, dude 🙂
I hope we can quickly get Pandora to become the ultimate monitoring tool!
just my 0.02¤.
bye for now!
You give us more than 0.02 ¤ 😉
We really appreciate your contribution, stay tuned 🙂
Enjoy Pandora!
::
Ok, thanks, you're doing the biggest part of the job, and it's a great job.
I'm doing quite an easy part; it's just configuration.
Anyway, it's just my view: I simply prefer the very secure SSH, but there is really no problem with an easier way to transfer XML files. I will continue using and participating in Pandora anyway.
I have developed some "automated" installation scripts for the agents, console and server that I will post here once they have been tested.
bye
::
Hi,
– what kind of people install Pandora? Who really _needs_ Pandora up and running on a LAN or WAN? (Mr. Homer Simpson on his home 3-computer LAN?)
– If a system administrator/technician can't deal with SSH (and authentication protocols), well… that's the signal to take some time to learn it or follow the tutorial (very good tutorial by the way 🙂).
Security is not an easy piece of cake (or tortilla, burritos or whatever you want :-p). Getting a new transport protocol is a good thing. Tunneling through SSL secures the data while it is on the wire, but it won't spare the Pandora admin from having to be a decently skilled person for server configuration anyway (firewalling, filtering, etc.).
bye for now!
I like your point of view very much 😉 but I have experienced some problems in environments where SSH is not available, and FTP with .netrc is a real pain in the ass. We're thinking about using a very simple TCP/SSL transfer solution: not a real protocol, only a small file-transfer utility that uses a simple password to authenticate, so we don't need to re-invent the wheel 😉
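Just to picture the idea, something in the spirit of the following (a rough sketch with the openssl command-line tools only; it is not the utility we are planning, and the port and filenames are made up):
[code:1]# server side: accept an SSL connection and dump whatever arrives into a file
openssl s_server -quiet -accept 41121 -cert server.pem -key server.key > incoming.data

# agent side: send a password line followed by the XML data file over a single SSL connection
( echo "agent_password"; cat data_out/monitoring_server.data ) | openssl s_client -quiet -connect server:41121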
Thanks for your feedback, it makes us feel we're on the right track! 🙂