Data centers have become an essential element of modern technology. If we add the current capabilities of artificial intelligence to the mix, we get a perfect superhero pairing, capable of delivering all kinds of advances and benefits. Yes, we can shout it to the wind: "Blessed is the time in which we live!"
The future: smart data centers
Artificial intelligence devoted to scaring us to death in iconic movies like 2001 or Terminator is a thing of the past; today it has other, much more interesting and practical purposes. For example, playing a fundamental role in data processing and analysis. Yes, that's it, futuristic AI: ever faster, more efficient and, by now, necessary to manage data centers.
We know that data is already the element that moves the world, an essential requirement for any operation, be it institutional, business or commercial. This makes data centers one of the most important epicenters of digital transformation. After all, their physical facilities house the equipment and technology that sustain, among other things, the information on which the world economy depends. Centers that handle data backup and recovery with one hand while supporting Cloud applications and transactions with the other. They guarantee an ideal climate for investment and opportunity, boost the economy and attract a large number of technology companies. They are very nearly the center of the digital revolution.
Data centers are not without problems, though. It is estimated that within three or four years, 80% of companies will have closed their traditional data centers. That is not foresight madness if you consider the myriad of inconveniences traditional data centers face: a certain lack of readiness for updates, infrastructure problems, environmental deficiencies and so on. But don't worry: as with so many things, there is a remedy, which is to take advantage of advances in artificial intelligence to improve, as far as possible, the functions and infrastructure of data centers.
Forbes Insights already pointed it out in 2020: AI is more than poised to have a huge impact on data centers, on their management, productivity and infrastructure. In fact, AI already offers potential solutions for data centers to improve their operations. And data centers, in turn, upgraded by artificial intelligence capabilities, process AI workloads more efficiently.
Power Usage Effectiveness, PUE
As you may guess, data centers consume a lot of energy, which is why artificial intelligence is being brought in to improve Power Usage Effectiveness (PUE). PUE is the ratio between the total electrical power consumed by the data center and the power actually consumed by its IT systems, and it is the standard metric for calculating data center energy efficiency.
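As a quick illustration of the metric, here is a minimal sketch; the power figures are made-up example values, not measurements from any real facility.

```python
# Minimal sketch: computing Power Usage Effectiveness (PUE).
# PUE = total facility power / power consumed by IT equipment.
# A value of 1.0 is the theoretical ideal; real facilities sit above it.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: a facility drawing 1,500 kW in total while its IT gear uses 1,000 kW
print(f"PUE = {pue(1500, 1000):.2f}")   # -> PUE = 1.50
```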
A couple of years ago, Google already managed to achieve a consistent 40% reduction in the amount of energy used for cooling by deploying DeepMind AI in one of its facilities. This achievement equates to a 15% reduction in overall PUE overhead, once electrical losses and other non-cooling inefficiencies are accounted for, and it produced the lowest PUE the company had ever seen. The key is that DeepMind analyzes all kinds of variables within the data center to improve the efficiency of the energy used and reduce its consumption.
Can Smart Data Centers be threatened?
Yes, data centers can also suffer from cyber threats. Hackers do their homework and are always finding new ways to breach security and sneak information out of data centers. However, AI once again shows its resourcefulness: it learns normal network behavior and detects threats based on irregularities in that behavior. Artificial intelligence can be the perfect complement to current Security Information and Event Management (SIEM) systems, analyzing the inputs of multiple systems and incidents and devising an adequate response to each unforeseen event.
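To picture the idea behind anomaly-based detection, here is a toy sketch; it is not a SIEM product, and the traffic figures are invented purely for illustration.

```python
# Toy sketch of anomaly-based detection: learn a baseline of "normal"
# network traffic and flag samples that deviate too far from it.
from statistics import mean, stdev

baseline_mbps = [120, 118, 125, 130, 122, 119, 127, 124]  # learned "normal" traffic
mu, sigma = mean(baseline_mbps), stdev(baseline_mbps)

def is_anomalous(sample_mbps: float, threshold: float = 3.0) -> bool:
    """Flag a sample whose z-score against the baseline exceeds the threshold."""
    return abs(sample_mbps - mu) / sigma > threshold

for sample in (126, 410):  # 410 Mbps is far outside the learned baseline
    print(sample, "->", "ALERT" if is_anomalous(sample) else "ok")
```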
Effective management
Through the use of intelligent hardware and IoT sensors, artificial intelligence enables effective management of data center infrastructure. It can automate repetitive work: activities such as monitoring temperature, equipment status, security, risks of all kinds and the management of cooling systems. It can also carry out predictive analysis that helps distribute work among the company's servers, optimize server storage systems, spot potential system failures, improve processing times and reduce common risk factors.
AI systems have already been developed that automatically learn to schedule data processing operations across thousands of servers 20-30% faster, completing key data center tasks up to twice as fast during times of high traffic. They handle the same or a higher workload faster, using fewer resources. Additionally, AI-driven mitigation strategies can help data centers recover from data disruptions, which immediately translates into lower losses during an outage and customers giving us a wide smile of satisfaction.
Well, what do you think of this special union, this definitive combo that artificial intelligence and data centers are and will be? Do you think anything can marinate better together? Data centers and the Cloud? N-able and Kaseya? White wine and seafood? Condensed milk and everything else? Leave your opinion in the comments!
Would you like to find out more about what Pandora FMS can offer you? Find out by clicking here. If you have to monitor more than 100 devices, you can also enjoy a FREE 30-day Pandora FMS Enterprise TRIAL. Installation in Cloud or On-Premise, you choose! Get it here.
Last but not least, remember that if you have a reduced number of devices to monitor, you may use the Pandora FMS OpenSource version. Find more information here.
Do not hesitate to send us your questions. The great team behind Pandora FMS will be happy to assist you! And if you want to keep up with all our news and you like IT, releases and, of course, monitoring, we are waiting for you on our blog and on our different social networks, from LinkedIn to Twitter, through the unforgettable Facebook. We even have a YouTube channel, with the best storytellers. Ah, and we also have a new Instagram channel! Follow our account, we still have a long way to go to match that of Billie Eilish.
"Adapt or die (and let others take your share of the cake)" is both an evolutionary law and a business law. Without going any further, the rise of new technologies and critical applications has led to a substantial change in data centers. It is only natural: so much data, generated by millions of Internet users spending their time online, means data processing centers, or data centers, require new advances and solutions to adapt to processing such an amount of information.
Therefore, current data centers are indeed evolving in response to this new situation. Improved facilities are now dedicated to supporting higher workloads and higher user traffic. We are talking about renewed systems and technological resources that provide breathing room, superior applications, shared data, flexibility and high security for the protection of information.
The market is a jungle, and demand is continually stimulated by new proposals, models and skills that promise to renew the future of the data center. What are data centers evolving towards? Let's check out together some of the most in-demand competencies that will drive data center evolution in the near future.
The work of data center technicians
Do not forget about them; in the end, they are the ones chiefly responsible for data centers: installation, server and network maintenance, daily performance monitoring, keeping the equipment environment controlled and optimal, and solving all those unforeseen events that usually come with networks and servers. Not to mention the emergencies outside working hours, which make them leave the shelter of their life as civilians to go and repair whatever mess has appeared. Data center technicians will therefore be an asset the market takes into account, and it will undoubtedly bet on the best and most prepared among them. They provide computer support to staff and clients with one hand while they sort out the bustle of servers and the network with the other. Their work is invaluable!
An architect in the Cloud
IT infrastructure and services in the Cloud are where the money is being invested; at the very least, they are the two most notable factors companies have wanted to bet on in recent times, and the arrival of 5G only reinforces their position by enabling faster and more reliable data transfers.
The data processing center, the technology company... absolutely everyone now wants to focus on the important factors surrounding this investment: security in the Cloud and its architecture. They are looking for that revolutionary cloud architect with deep knowledge of the field, an architecture project up their sleeve and the final design of a unique product in mind.
Hybrid management
Hyundai and its hybrid cars are not the only ones flying the flag of hybridization; IT management is also going hybrid: something unified to manage both Cloud infrastructure and traditional services. The benefits are many, including the fact that hybrid IT management solutions provide key automation across IT functional areas. This encompasses service management, compliance, assurance and governance.
Now that companies are making greater use of AWS, Microsoft Azure, Google Cloud Platform and other Cloud services, IT administrators must guarantee network bandwidth between applications, and organizations will get into hybrid management more than ever.
Data center security
We live in a world where millions of users roam the Internet at ease, which makes managing and protecting data centers considerably more difficult. To achieve higher security, companies have to secure their data and guarantee uninterrupted network performance. That is why they hire data analysts and cybersecurity architects skilled enough to look over the big picture and build a model of detection and protection against potential threats.
Edge computing
The arrival of edge computing certainly helps IT companies collect and process information from IoT devices, then transmit that data to a data center, be it remote or local. An edge server, as we know, differs from an origin server in its closeness to the client machine.
Edge servers cache content in localized areas, helping to ease the load on origin servers. As the implementation of edge computing progresses, the thinking heads of data centers will look for talent with skills in networking, system design, database modeling and security.
Edge computing, security, hybrid management, cloud architecture and specialized technicians are just some of the specialties towards which data centers are heading in their evolution. So if you are thinking of making a career out of it, this is the right time to consider it. Ditch whatever you are up to and join the demand around data centers. It is not Bitcoin, but it is undoubtedly a more consolidated bet.
What is a CVE and why is it important for your security?
There are “good” hackers. They call themselves security analysts and some even devote their time to working for the common good. They investigate possible vulnerabilities in public and known applications, and when they find a possible security flaw that could endanger the users of those applications, they report that vulnerability to the software manufacturer. There is no reward, they are not paid for it, they do it to make the world safer.
What is a CVE?
This entire process, from the moment the manufacturer accepts the reported vulnerability until it is fixed, is recorded in a public reference system called the CVE database. This is a database maintained by the MITRE Corporation (which is why it is sometimes known as the MITRE CVE list) with funding from the National Cyber Security Division of the United States government.
The CVE Program is an international, community-based effort that relies on that same community to discover vulnerabilities, which are then assigned identifiers and published in the CVE list.
Each CVE uniquely identifies a security problem. The problem can be of different types, but in any case it is something that, if left unsolved and hidden, someone will someday take advantage of. A CVE simply describes which application is vulnerable and which version and/or component is affected, without revealing sensitive information. When the error is corrected, it reports where the solution can be found. Generally a CVE is not made public until the mistake has been corrected. This is especially important, since it guarantees that the users of the application are not exposed to gratuitous risk when information about the flaw is published. If there were no CVE, researchers would publish such information without coordinating with manufacturers, producing unacceptable security risks for users, who would have no way to protect themselves against data revealing security flaws in the systems they use. Don't forget that every software vendor has public CVEs; nobody is spared.
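As an aside, published CVE records can be queried programmatically. Here is a hedged sketch using NIST's public NVD REST API; the endpoint and field names follow the 2.0 schema as we understand it, so check the official documentation before relying on them.

```python
# Sketch: looking up a public CVE record through NIST's NVD REST API.
# Assumes the NVD 2.0 endpoint and response layout; verify against the docs.
import requests

def fetch_cve(cve_id: str) -> dict:
    url = "https://services.nvd.nist.gov/rest/json/cves/2.0"
    resp = requests.get(url, params={"cveId": cve_id}, timeout=10)
    resp.raise_for_status()
    return resp.json()

data = fetch_cve("CVE-2021-44228")  # the well-known Log4Shell entry
cve = data["vulnerabilities"][0]["cve"]
summary = next(d["value"] for d in cve["descriptions"] if d["lang"] == "en")
print(cve["id"], "-", summary[:120], "...")
```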
This consensus between manufacturers and researchers on the way to reveal sensitive information regarding security flaws of an application allows a continuous improvement of the security of public information systems. Although MITRE is originally a US funded organization, there are partner organizations around the world that help to organize CVEs regionally, decentralizing management and helping local manufacturers organize more efficiently.
INCIBE and ARTICA
CVEs are coordinated by CNAs, volunteer organizations that coordinate and resolve disputes when security researchers and manufacturers hold conflicting positions. The root CNA is MITRE, and there are CNAs spread all over the world. Most major software and hardware manufacturers, such as Microsoft, Cisco, Oracle, VMware or Dell, are CNAs that take part in the CVE program.
INCIBE, the National Cybersecurity Institute of Spain, is a Spanish organization that has recently become a Root CNA, a member with a special status within the CVE hierarchy, as it coordinates the Spanish CNAs. It is also the country's contact point for receiving vulnerabilities discovered in the IT domain, industrial systems and IoT (Internet of Things) devices.
Thanks to its collaboration with INCIBE, ÁRTICA, the company behind Pandora FMS, Pandora ITSM and Pandora RC, has become an official CVE CNA. This is especially important, as it shows Pandora FMS's commitment to information system security and makes the company available to researchers from all over the world to work on solving any problem that may affect its users.
As of this moment, the program has two hundred and one CNAs from thirty-two countries, with ARTICA being number two hundred worldwide and the third in Spain. Having joined the program, ARTICA can publicly receive any information related to the security of Pandora FMS, Pandora ITSM or Pandora RC and reliably handle both the resolution of the problem and its public communication.
Our vulnerability management policy allows us to assure any Pandora FMS user that any problem will be dealt with rigorously, prioritizing the impact and mitigating risk in productive environments, while guaranteeing the researcher correct reception, communication and publication in the open of his/her work.
Vulnerability disclosure policy in Pandora FMS
At Pandora FMS, we have a very open policy in this regard. Pandora FMS was born with an open philosophy; this means not only open source, but also free knowledge and, of course, process transparency. We have a fully public and transparent vulnerability disclosure policy. Over the years, different researchers have contacted us to report security problems in Pandora FMS. Yes, we too have had, and will have, security flaws. And thanks in part to the selfless work of security researchers, we have been correcting many of these flaws. We are so transparent that we publish them ourselves in a list of known vulnerabilities on our own website.
Security bug reports generally have a life cycle that allows users to avoid the added risk of publishing information about software bugs ahead of time, before the manufacturer has been able to create a patch and distribute it in good time to its users. In this process, the security breach remains in a waiting stage, where the manufacturer accepts the reported problem and agrees on a date to solve the problem. The security researcher waits patiently and makes the solution of the problem as easy as possible: providing more information, collaborating with the development team, even doing some additional testing when the patch is available. The point is to work as a team to improve the robustness of the software.
The e-mail box [email protected] is open to anyone with an interest in improving the security of our software.
We would love to say that companies, above all else, value their employees, but it would be as naive as it is false. Yes, because at the top of the companies’ scale of values is data. The precious data. Data that actually only plays an important role when properly stored. And here is where the data warehouses come in.
What exactly is a data warehouse?
A data warehouse is actually a way of managing your data, specially designed to support business activities, especially those related to analytics. Enterprise data warehouses contain vast amounts of historical data to collate, query, mine for patterns or analyze. The data the warehouse centralizes comes from a wide range of different sources: application log files, transactional applications and so on.
Apart from centralizing data and unifying their sources, data warehouses help in the decision-making process. This is because they contain valuable raw business knowledge. A very rich historical record for analysts and data experts. And from them, from the experts, we have taken the main advantages of data warehouses:
Source tracking and verification. Thanks to data warehouses, we can trace data back to its source and verify both the information and the origin it comes from. That way we can store this source in our database and always ensure consistent and relevant information.
Sifting out the data relevant to companies. Once in the system, the quality and integrity of the data is guaranteed. Companies will only keep useful data, the data necessary for their activities, since the data warehouse format lends itself to analyzing that information at any time and under any circumstance. No one needs to depend any longer on a decision-maker's hunch or rash call, or on incomplete or poor-quality data. The results will be fast and accurate.
In the data warehouse, the data is copied and processed, integrated and restructured, in advance, in a Semantic Data Store. This makes any analysis process much easier.
Imagine analyzing large amounts of data of all kinds and retrieving a value from them in a specific and precise way.
Types of data warehouses
If we strictly stick to company data warehouses, today we can have three main types:
Enterprise Data Warehouse (EDW): A data warehouse that contains the business data of an enterprise, including all the information about its customers. It enables data analysis, can provide actionable insights and offers a unified approach to organizing and representing that data.
Operational Data Store (ODS): Here we are dealing with a central database that provides a snapshot of the freshest data from multiple transactional systems so that we can prepare operational reports. The ODS enables organizations to combine data in its original format, from several sources, to produce business reports.
Data mart: It focuses on a single functional area of an organization and encompasses a subset of the stored data. The data mart is specially designed for use by a specific department or set of users in an organization. We are talking about a condensed version of the data warehouse.
Small retrospective
Most would stop the clock on their time machine in 1980, when they believe the concept of the data warehouse arose, but we would have to let it run a little further back, to the hippy sixties, when Dartmouth College and General Mills developed the terms "dimensions" and "facts" in a collaborative project.
Then we would advance to the seventies to witness how Nielsen and IRI introduced dimensional data marts for retail sales and Teradata Corporation launched a database management system designed to help and assist in decision-making. Then, after a decade of progress, we reach the eighties, when the first implementation of a data warehouse emerged at the hands of Paul Murphy and Barry Devlin, both IBM employees.
From the data warehouse to the Cloud?
As we have already seen in previous articles, the coronavirus pandemic that has devastated our planet has a lot to do with the new technological restructuring and with the religious ascents to the Cloud. It is also, of course, to blame for moving data warehouses to Cloud platforms.
On-premise data warehouses have great advantages: security, speed and so on. But they are not that elastic, and estimating how the data warehouse will have to scale to meet future needs is quite complex. During the famous lockdown, most workloads moved to the Cloud, and data warehouses naturally followed their example. Even those of large companies, the ones no one thought would ever abandon their local data centers, are switching to the Cloud to make the most of its advantages: flexibility in computing and storage, ease of use, versatile management and cost-effectiveness.
Tomorrow: Automation of the data warehouse
The list of issues a data warehouse deals with is still there: data integration, data views, data quality, optimization, competing methodologies, and so on. However, we can find an answer: warehouse automation.
With data warehouse automation, a data warehouse can use the latest technology for pattern-based automation and advanced design processes. This allows you to automate the planning, modeling and integration steps of the entire life cycle. We are faced with what seems like a very efficient alternative to traditional data warehouse design, one that reduces time-consuming tasks such as generating and deploying ETL codes on a database server.
After this long journey through the life and exploits of the data warehouses, we say goodbye, as you can see, focusing on the answers that it promises to give us in the near future. We will always be positive in the matter.
Privacy policies in three other countries outside the EU
Are you not a little curious? Even a little bit, an itch right under your chin or at your temple, about how privacy policies are handled in other countries? Aren't you? Well, surprise! Today, on the Pandora FMS blog, we are going to get it out of our system by discussing how they do it, how they deal with international data protection and privacy, in at least three countries outside the European Union.
We are not going to choose countries at random; we will leave that for a special on where we would go on vacation at Pandora FMS. The three countries we have chosen have one thing in common: they have initiated data protection reforms. They want to fully guarantee the safety of their citizens by offering them an improved data protection law.
This decision by these three countries is very likely due to the current pandemic; you know, Covid-19 everywhere. With the almighty Internet as the go-to platform for sharing data, crooks had an obvious target, and for some time now we have seen countless data breaches and cases of cybersecurity fraud. The demand for data security has generated proportional concern, and a large number of countries have decided, under pressure, to reform their archaic and moth-eaten privacy and data protection frameworks. This is absolutely necessary. We have already seen it in film sagas such as James Bond or the Bourne series: every country worth its salt handles sensitive data worth protecting.
Ó Pátria amada, idolatrada, salve, salve, Brasil.
We transport ourselves to the sunny, fine sands of Brazil's beaches to find that the country approved its national Internet law back in 2014, and that this same law defined the policies on data processing on the network. The strengths of that legislation were treating consent as the guiding principle and restricting the exchange of personal data of minors under 16.
Brazil is currently preparing to introduce a new data protection plan through an ANPD (National Data Protection Authority). In fact, it has already published its regulatory strategy for the 2021-2023 period. The ANPD wants to strengthen data protection in the country through the development of regulations, new complaint management for data breaches and adherence to the LGPD. These new privacy policies are not without certain similarities to the EU's GDPR.
In case the acronym escaped you, the LGPD is the General Law for the Protection of Personal Data, which has been with us since August 2020. Its function is to regulate the collection and use of personal data by all companies that do business and market products in Brazil. It goes without saying that all these companies must comply with the new law, which clearly defines the penalties for violations and requires companies to comply with all of its provisions. It also aims to give Brazilians fundamental rights that improve their control over their data.
O Canada! Our home and native land!
The country of moose and maples has recently submitted several amendments to its data privacy law, now proposing the Consumer Privacy Protection Act (CPPA). Bill C-11, the Digital Charter Implementation Act, replaces the previous data privacy law known as PIPEDA (the Personal Information Protection and Electronic Documents Act). Indeed, Canada has always strived both to hunt down Bigfoot and to ensure data privacy, although it must be said that its legislative acts on the subject are sometimes limited to the private, commercial and institutional sectors. The power to enforce the rules of this law is shared between the Office of the Privacy Commissioner and the Personal Information and Data Protection Tribunal.
Article eight of this new law vows to protect citizens from unreasonable searches and seizures. The Consumer Privacy Protection Act also introduces restrictions on the collection, use and disclosure of personal information by any private entity and imposes high penalties for infringing it or failing to report an infringement.
This new law is based on the consent of citizens, but, to keep everyone happy, it also allows companies to use certain validation and consent strategies to collect personal data. Citizens may withdraw their consent in the future, if they wish, and request the deletion of their data.
Oh say, can you see, by the dawn’s early light…
There is no way around it: unlike more sensible countries, the United States does not have a single strict data privacy policy. What it actually has is state-by-state compliance, which varies in rules, guidelines and penalties. We are faced with several sector-specific federal laws and, as we said, privacy laws at the state level. Who regulates these privacy laws? Well, that is in the hands of the Federal Trade Commission. It is in California where we find the strictest privacy policies. These policies give individuals the right to full transparency about the data companies use and the option not to disclose their data if they do not wish to do so.
Currently, many US states are expanding their data policies. Since the pandemic, it has become an unavoidable need.
Update or die, you know, especially regarding the security and defense of our data. If you liked this article in which we visited different countries, leave us a comment down there with the country you think has the highest data vulnerability and, why not, the country you would visit on a trip next year. I sincerely hope they don't match.
The fight of the century: Data Center VS Cloud! Let’s go!
On this blog we have always been eager for fights and competitions of whatever kind we please. We are like that, like fierce Pokémon trainers who want to find out once and for all who has the greatest capabilities to win. We have been praised for it, we have been hated for it, but it does not matter; the point here is not having fun, but giving the most complete information about the contenders and the battle, so that users can see clearly who they should choose in the future. For all these reasons, today we have in our very own ring: Data Center VS Cloud.
How to choose between a data center and Cloud storage?
When the decisive moment arrives, a company must decide what it intends to do with data storage: "Do we send everything to the Cloud? Do we store our data right here, in our own data center? Do we outsource it to a professional data center?" After all, there are multiple factors: financial elements, company logistics, different clauses and details, and a lot of regulation to take into account that has you sweating when it comes to finding the correct answer.
The truth? In this article we are going to expose situations in which data centers beat the Cloud, because, for better or for worse, we are facing a foreseen victory.
Do you need more security?
It is true that the Cloud is no longer quite on cloud nine, and both the Cloud and its computing and data storage solutions have made great progress in recent times. In fact, they offer great infrastructure with protected access and the pay-as-you-go add-on. But if you really want to have the appropriate protocols, compliance and security software, your data may be better off and more secure in a data storage center, whether external or in-house. There are many companies that offer external, professional and guaranteed data storage, certifying that the information is your exclusive property and that the data will always be kept safe.
As we have said, storage security in IT Clouds is not as weak as some leaks of private pictures of celebrities have led us to believe. What’s more, the Cloud is often the first choice for a large number of companies, but there are certain nuances in Cloud storage that lead others to choose data centers. And there is a certain lack of control when choosing Cloud storage: problems with shared servers, lack of automatic backups, data leaks, fraudulent devices, vulnerable storage gateways, etc.
Combining infrastructure and profitability
If there is something that clouds convey when seen from the mainland, it is comfort and convenience, and the same goes for the Cloud: comfortable, agile... However, user fees can end up being quite expensive, depending on the type of services you need. An on-premise data center, in your own facilities, can also be one of the most expensive options; in addition, to manage it you must have a good security and IT team that takes care of regular updates and keeps it operational and always ready.
External storage might be the middle ground: your own space within a data center, or as part of a colocation package. If you think about it, you get the advantages of the Cloud without having to spend all the money that hosting data in a local data center normally requires. It is a very attractive option, considered by companies that have begun to consolidate and are now in full growth. Something more robust and reliable than the Cloud and without so many facility headaches.
Do you handle sensitive customer data?
Do you know when companies make up their minds quickly in this fierce on-premise vs Cloud fight? When it comes to collecting, saving and using customer data that, if leaked, lost or stolen, would mean the destruction of their business, of the private life of the person who trusted them, or of public welfare in general. To give you an idea, Emperor Palpatine would never store the plans for the Death Star in the Cloud. Too risky.
Imagine then the companies that compile and safeguard financial, political, medical, institutional or otherwise sensitive data... All of them choose physical data centers over the Cloud, and the same goes for telecommunications and social media companies. Physical centers are not perfect, but the Cloud has proven itself vulnerable and easier to breach more often.
You need a Cold Storage Location
When we talk about a cold storage location, we mean storing data completely offline: it is not in the Cloud at all, it does not relate to the Cloud, it does not want the Cloud, it does not even know what the Cloud is. Data is stored on safe physical media and then moved off-site in case of a cataclysm. You know: a flash flood, a volcanic eruption, a hurricane straight out of Twister or a robbery attempt. This data storage option is often used by companies with long-term compliance requirements, financial institutions, brands threatened by ransomware attacks... They all see a cold storage location as the safest backup plan they can have.
Conclusion: so, where does that leave us?
Well, if we have to reach some conclusions, it must be said that storage in the Cloud is often convenient and has its place, but, of course, it is neither the only option nor the best one for many companies. Data centers are the ones that best serve companies, providing them with security, scalability and peace of mind. They are also the only alternative for companies looking for a cold storage location.
After this brawl, Cloud VS on-premise, you can better weigh the advantages and disadvantages of each one and make the best decision for your company and your customers' data.
We all remember a couple of biblical allegories here: that of the Good Samaritan, that of the Prodigal Son, that of an Aragonese with new Adidas and the trolleybus on line 8... But the one that interests us today is that of the most holy David of Bethlehem, preceded by Saul and succeeded by Solomon, who, among his many achievements, managed to defeat the Philistine giant Goliath. And he did so despite their difference in size and strength, which comes close to explaining the potential of micro data centers compared to traditional data centers.
Micro data centers, small but actual beasts
Look in the rear-view mirror, an allegorical rear-view mirror of course, as it is very unlikely that you are driving while reading this brilliant article. Look in the rear-view mirror: far back on the road lies the gray monotony of centralized data centers. Yes, with companies currently going for shiny new cloud computing, data centers are subtly becoming micro data centers, that is, smaller and more succinct versions of the traditional system, mechanics and apparatus.
These “mini versions”, compared to traditional data centers, are built for a different type of workload. In addition, they solve very specific problems that traditional data centers can no longer solve.
Macro qualities of a micro data center
If we go straight to the most common features, the typical micro data center runs around ten servers and one hundred virtual machines. They are autonomous systems that offer the same capabilities as traditional data centers, and more. We are talking about cooling systems, security systems, humidity sensors and a constant power supply.
I no longer need you to look in the rear-view mirror; now look at the windshield. Due to the global Covid-19 pandemic, remote work has become a permanent part of our lives. Well, these micro data centers, as small and cute as they come, turn out to be the ideal proposal for locations of all kinds. They can be deployed in a greater number of locations and rooms; even for a rudimentary installation in a classic office, they are the quietest and most functional option.
More benefits of micro data centers
If we had to make an official list of the benefits and advantages of our little David, the first thing we would point out, in bold type, is that micro data centers directly empower companies. And they do not do so by magic; they do it, for example, by reducing server costs, since they do not require bulky storage, or by giving companies the option to upgrade according to their own needs. This alone amounts to a substantial difference in costs that will come in handy for the development and growth of companies.
Micro data centers are closer to users, which also translates into a reduction in latency. All of that in addition to how cheap they are compared to traditional data centers.
If you keep looking ahead, the advances keep coming, one after the other, like traffic signs that we quickly leave behind on our allegorical ride. Technology companies have ever more data to accumulate and need ever more processing power. Big brands will have no problem, they have the money, but what about small offices, retail outlets or even local firms? They, more than anyone, should take advantage of edge computing and micro data centers to improve their businesses. And not only because they may sit in the strangest, most remote and forsaken locations, but because these micro data centers can run all kinds of security systems, cash registers and other digital systems that small businesses usually need.
Imagine your neighborhood grocer, "Frank, The 6 Fingers," using data analytics to improve his marketing. After all, micro data centers only need a comfortable cabinet for cooling. And if we are talking about a small savings bank or an ordinary bank, well, they can make their financial practices more efficient with micro data centers, leaning even towards IT solutions, edge computing, IoT...
But be careful: micro data centers should not be confused with edge computing.
To keep them apart: micro data centers take advantage of edge computing to reach their goal, while edge computing is what increases processing power by bringing it closer to the data source, speeding up data transport and improving device performance.
Even if this time it comes from 1 Samuel 17:4-23; 21:9, David steps up again and knocks out Goliath, proving that the small can take down the big and that we all have a chance in this land of God, at least if we are seasoned enough and have a fighting spirit.
A Service Level Agreement (SLA) is a document that details the expected level of service guaranteed by a vendor or product. This document generally sets out metrics such as uptime expectations and any compensation due if those levels are not met.
For example, if a provider advertises an uptime of 99.9% and exceeds 43 minutes and 50 seconds of service downtime, technically the SLA has been breached and the customer may be entitled to some type of remuneration depending on the agreement.
What do we want SLAs for?
A Service Level Agreement (SLA) specifies the quality of a service. It is a way of defining the limit of failures or times in which the response to a service is measured. Each service measures its quality in a different way, but in all cases it refers to times, and therefore it can be measured.
For example, if you worked in a restaurant, you would define your customer service SLA with several parameters:
Maximum time between a customer sitting at the table and being served by a waiter.
Maximum time between ordering a drink and having it served.
Maximum time between requesting the bill and paying.
Suppose that in our restaurant, we consider that the most important thing is the initial attention, and that no more than 60 seconds can go by, from when you sit down to when you are served. If we had a fully sensorized business with IoT technology, we could measure the time from when the customer sits at a table until a waiter approaches the table.
That way, we could measure the number of times each waiter manages to serve a customer within the established time. The way to do it can be more or less elaborate, but let's keep it simple: suppose that every time they do it in less than 60 seconds they comply, and when they do not make it, they do not comply. So if, out of ten customers served in an hour, they fail with only two, they are 80% compliant. We could average their entire workday and thus easily compare different employees to find out which one delivers more "quality" on the "serve a customer when they sit down" metric.
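In code, that compliance calculation is trivial; here is a minimal sketch with invented serving times:

```python
# Minimal sketch of the waiter example: share of customers served within
# the 60-second target. The serving times are invented for illustration.

SLA_TARGET_SECONDS = 60

def compliance(serve_times_s: list[float]) -> float:
    """Percentage of services completed within the target time."""
    within = sum(1 for t in serve_times_s if t <= SLA_TARGET_SECONDS)
    return 100 * within / len(serve_times_s)

# Ten customers in one hour; two were served too late (75 s and 90 s)
hour = [40, 55, 30, 75, 58, 45, 90, 35, 50, 20]
print(f"Compliance: {compliance(hour):.0f}%")   # -> Compliance: 80%
```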
If we use a monitoring system, we could notify their manager every time the overall quality of service drops below 80%, and by generating automatic reports we could reward each month those with the best compliance percentage and take measures with (or fire) those doing worst.
One of the most important functions of monitoring systems is to measure. And measuring service compliance is essential if we care about quality. Whether we are on the provider side or on the client side.
If you are paying for a service, wouldn’t you like to check that you are actually getting what you pay for?
Sometimes we do well not to trust the measurements of others, and it is necessary to check it for “ourselves.” For this, monitoring tools such as Pandora FMS are essential.
What is the «uptime» or activity time?
Uptime is the amount of time a service is available and operational. It is generally the most important metric for a website, online service or web-based provider. Sometimes uptime is confused with an SLA, but uptime is nothing more than a very common metric in online services that is used to measure SLAs; it is not an SLA itself, which, as we have seen, is something much broader.
Its counterpart is downtime: the amount of time a service is unavailable.
Uptime is usually expressed as a percentage, such as “99.9%”, over a specified period of time (usually one month). For example, an uptime of 99.9% equals 43 minutes and 50 seconds of inactivity.
What are the typical metrics of a supplier?
Those that are agreed between the supplier and the client. Each service will have its own metrics and indicators. Thus, in our Monitoring as a Service (MAAS) we can establish several parameters to be measured, among others, let’s see some of them to better understand how to «measure the service quality» through SLA:
Minimum response time to a new incident, 1 hr in standard service.
Critical incident resolution time: 6 hours in standard service.
Service availability time, 99.932% in the standard service.
When we talk about a time percentage, it generally refers to the annual calculation, so 99.932% corresponds to a total of 5 h 57 m 38 s of service downtime in a year. We can use our SLA calculator (below) to test other percentages.
Conversely, for a fixed amount of downtime we can run the inverse calculation, using online tools such as uptime.is. Using it, we get that one hour of downtime corresponds to the following (see the sketch after this list):
Weekly reporting: 99.405 %
Monthly reporting: 99.863 %
Quarterly reporting: 99.954 %
Yearly reporting: 99.989 %
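The arithmetic behind both directions is simple; here is a rough sketch using approximate period lengths, so the results may differ by a rounding hair from the online calculators.

```python
# Sketch of the SLA arithmetic above: convert an uptime percentage into
# allowed downtime, and a fixed amount of downtime into the equivalent
# uptime percentage for different reporting periods.

PERIODS_H = {"week": 7 * 24, "month": 365.25 * 24 / 12,
             "quarter": 365.25 * 24 / 4, "year": 365.25 * 24}

def allowed_downtime_h(uptime_pct: float, period: str = "year") -> float:
    return PERIODS_H[period] * (1 - uptime_pct / 100)

def uptime_pct(downtime_h: float, period: str) -> float:
    return 100 * (1 - downtime_h / PERIODS_H[period])

print(f"{allowed_downtime_h(99.932, 'year'):.2f} h/year")   # ~5.96 h, about 5 h 58 m
for p in PERIODS_H:
    print(f"1 h of downtime per {p}: {uptime_pct(1, p):.3f} % uptime")
```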
Similarly to the initial waiter example, we can measure compliance with a support SLA by measuring the sum of several factors: if all are met, we are meeting the SLA; otherwise, we are not. This is how Pandora ITSM, the helpdesk component integrated into Pandora FMS, measures it. Pandora FMS clients use Pandora ITSM for support, and thanks to it we can make sure we attend to client requests on time.
How to calculate the service SLA time?
Use our online calculator to calculate a service downtime. For example, test 99.99% to see the maximum downtime for a day, a month, or the entire year.
How can Pandora FMS help with SLAs?
Pandora FMS has different tools to exhaustively control the SLAs of your clients and suppliers. You have SLA reports segmented by hours, days or weeks, so you can visually assess where the breaches are.
This is an example of an SLA report in a custom time range (one month) with bands by ranges of a few minutes.
There are reports prepared to show the case of information sources with backup so that you can find out the availability of the service from the customer’s point of view and from the internal point of view:
This is an example of a monthly SLA view with detail by hours and days:
This is an example of a monthly SLA report view with a weekly view and daily detail:
This is an example of an SLA report view by months, with simple views by days:
Service monitoring
One of the most advanced functions of Pandora FMS is service monitoring. It is used to continuously track the status of a service which, as we saw at the beginning, is made up of a set of indicators or metrics. A service often has a series of dependencies and weightings (some things matter more than others), and all services have a certain tolerance or margin, especially if they are made up of many elements, some of them redundant.
The best example is a cluster, where if you have ten servers, you know that the system works perfectly with seven of them. So the service as such can be operational with one, two or up to three machines failing.
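As a toy sketch of that tolerance idea (not how Pandora FMS implements it internally, just the underlying logic, with invented states and thresholds):

```python
# Toy sketch: a service built from redundant elements stays OK as long as
# enough of them are healthy; WARNING when the margin is gone, CRITICAL below.

def service_status(element_states: list[bool], min_ok: int) -> str:
    up = sum(element_states)
    if up < min_ok:
        return "CRITICAL"
    if up == min_ok:
        return "WARNING"   # still working, but no margin left
    return "OK"

# Ten-server cluster that works perfectly with seven nodes: up to three may fail
print(service_status([True] * 8 + [False] * 2, min_ok=7))   # -> OK
print(service_status([True] * 7 + [False] * 3, min_ok=7))   # -> WARNING
print(service_status([True] * 6 + [False] * 4, min_ok=7))   # -> CRITICAL
```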
In other cases, a service may have non-critical elements that are part of the service and that we want to keep an eye on, even if the overall service is not affected by them.
One of the advantages of service monitoring is that you can easily trace the route to a failure, literally finding the needle in the haystack. In technology, the source of a problem can be tiny compared to the amount of data you receive. Services help us determine the source of the problem and isolate ourselves from informational noise. They also let you monitor the degree of service compliance in real time and take action before service quality for a customer is affected.
On the way to perfecting its services, Pandora FMS launches one of the most advanced and complete solutions in its history as monitoring software: Monitoring as a Service (MaaS).
As we all know by now, Pandora FMS is a software for network monitoring that, among many other possibilities, allows visually monitoring the status and performance of several parameters from different operating systems (servers, applications, hardware systems, firewalls, proxies, databases, web servers, routers…). It can also be deployed on almost any operating system and has remote monitoring (WMI, SNMP, TCP, UDP, ICMP, HTTP …), etc.
But what concerns us this time is to see how Pandora FMS once again surpasses itself with Monitoring as a Service. Because yes, it is time for you to have Pandora FMS ready to use and ready to cover all of your needs. From now on, avoid wasting valuable resources on installation, maintenance and operation: MaaS is conceived as a flexible and easy-to-understand subscription model.
Monitoring as a Service (MaaS) advantages
Rather than explaining it roughly and in a rush, let's go into detail and list some of the most important advantages of Monitoring as a Service (MaaS).
With Monitoring as a Service, you do not need to invest in an operations center, or in an internal team of engineers to manage monitoring. That’s it, without capital expenditures (capex) or operating expenditures (opex).
With Pandora FMS Monitoring as a Service you shorten the time it takes to get value out of monitoring.
Available 24/7: access it anytime, anywhere, with no downtime associated with monitoring.
Generate alerts based on specific business conditions and discover the easy integration of this service with business processes.
Important: Permanent security. All information is protected, monitored and complies with GDPR.
Operation services: we can operate it for you, saving resources and optimizing startup times.
Custom integrations, with Pandora FMS specialist consultants at your disposal.
Deployment projects, to support specialized resources wherever you need them.
Here is our proposal in more detail
What does this mean for your company or business?
Going straight to the point, Monitoring as a Service (MaaS) provides unlimited scalability and instant access from anywhere, and frees you from worrying about maintaining storage, servers, backups and software updates.
It is up to you to discover, right away, how the digital transformation of all business processes makes Monitoring as a Service (MaaS) an essential activity to boost the productivity of your company.
Some frequently asked questions about the solution (FAQ)
Of course, given such a technological scoop, you may have some doubts about the subject. Here we answer several of the most frequent questions that we were asked.
What agent limit does the service have? Does it have an alert or storage limit?
There is no agent limit, although the service starts from 100 agents. There is no limit on alerts or disk storage.
How long is history data stored?
45 days maximum. However, you may optionally hire a history data retention system to store data for up to two years.
What is the service availability? What happens if it crashes on a weekend?
The service availability SLA is 99.726% in Basic service, 99.932% in Standard service and 99.954% in Advanced service. In short, we will make sure it is never down.
In which country are the servers located?
We have several locations, to comply with different legislations, such as GDPR (EU), GPA (UK), CBPR (APEC) and CPA (California).
What security does the service offer?
In addition to an availability SLA guaranteed by contract, our servers are exclusive for each client, we have 24/7 monitoring, and our own system security. Of course, backup is included in the service.
How much does the service cost?
You pay a fee per month, which is calculated on the number of agents you are using that month. So if you increase the number of agents in a certain month, you will pay more that month. However, if you decrease the number of agents, you will pay less. There are also some start-up costs for the service and also some optional packages, such as if you want our engineers to develop a custom integration or help you deploy monitoring in your internal systems.
How is it billed?
Quarterly or semi-annually, with monthly cost calculations, so you can plan growth and costs without surprises.
What does the service include?
From Pandora FMS Enterprise license to the operating system, database management, system optimization, maintenance, updates, emergency patches, integration with Telegram and SMS sending, backup and recovery, preventive maintenance, environment security and any other technical task that may take up operating time. You will only have to operate with Pandora FMS.
What is the difference between Basic, Standard and Advanced services?
With the Basic service, if you want to make a report or configure an alert, you can do it directly, without worrying about installing, configuring or parameterizing anything. In the Standard and Advanced services you can also ask us to do it for you and we will be happy to do so; the same applies to building remote plugins, creating reports, users, policies, graphs or any other Pandora FMS administrative task. In the Standard and Advanced services you will have a number of service hours each month for any request you may make, and our technical team will be at your complete disposal.
What are the service hours?
Full office hours (from 9 AM to 6 PM) in America and Europe. From San Francisco to Moscow.
If you can no longer handle the intrigue and want to see how far the possibilities of Monitoring as a service go, you may now hire the solution through this link.
Would you like to find out more about what Pandora FMS can offer you? Find out by clicking here. If you have to monitor more than 100 devices, you can also enjoy a FREE 30-day Pandora FMS Enterprise TRIAL. Installation in Cloud or On-Premise, you choose! Get it here.
Last but not least, remember that if you have a reduced number of devices to monitor, you can use the Pandora FMS OpenSource version. Find more information here.
Do not hesitate to send your inquiries. The great team behind Pandora FMS will be happy to assist you! And if you want to keep up with all our news and you like IT, releases and, of course, monitoring, we are waiting for you on our blog and on our different social networks, from LinkedIn to Twitter, not forgetting Facebook. We even have a YouTube channel, with the best storytellers. Ah, and we also have a new Instagram channel! Follow our account; we still have a long way to go to match Billie Eilish's.
Is Cybersecurity Awareness Month the event of the year?
Welcome back to the incredible and majestic Pandora FMS blog. In today’s post, we are going to deal with an event belonging to the month of October, that depressing month in which we become aware of fall, it is colder and someone keeps cutting short our daylight hours. If April is the month of flowers and November the month of the male mustache for testicular cancer, October is the Cybersecurity Awareness Month.
What is Cybersecurity Awareness Month?
Cybersecurity Awareness Month, which is commemorated every October, was created as a joint effort between the United States government and industry to ensure that everyone has the resources they need to stay safe and secure online.
Since its inception, under the supervision of the US Department of Homeland Security and the National Cyber Security Alliance, Cybersecurity Awareness Month has grown stronger and more widespread, reaching millions of users and businesses, and all types of corporations and institutions. Today, in 2021, it continues to make an impact, and not only in its country of origin: it now does so around the world, because who would not join the cause of feeling more protected in the times we live in?
Cybersecurity Awareness Month: Origins
As we’ve explained, the National Cyber Security Alliance and the US Department of Homeland Security launched Cybersecurity Awareness Month in October as a shared effort to help Americans stay safe online. And they did so quite a few years ago: all those that separate us from October 2004.
When a baby starts to walk, the first steps are short and simple. So were the early Cybersecurity Awareness Month awareness efforts. Most of them focused on giving recommendations on how to update the antivirus, at least twice a year. But little by little they increased their ambitions, their reach and their participation. For example, launching complex campaigns in the industry, involving clients, NGOs and even university campuses.
The organizers made it clear in these years that responsibility for cybersecurity problems is fully shared. From large companies to small users with their battered laptops, all of us must protect our digital treasures and always keep them under supervision.
The European Cybersecurity Month (ECSM)
What is European Cybersecurity Month? The European Cybersecurity Month works, like the American Cybersecurity Awareness Month, as an annual campaign devoted to promoting cybersecurity among users, companies and institutions. The only difference is that the European Cybersecurity Month is promoted by the European Union.
Throughout the month of October, safety information is provided online and awareness is raised through good practices. Activities are carried out around the entire continent: conferences, workshops, seminars, presentations, etc. Everything in order to make us finally aware of digital hygiene.
We must thank the European Union Agency for Cybersecurity (ENISA) and the European Commission for the fruitful month of European Cybersecurity Month, which, of course, has the full support of the EU Member States.
Some events of Cybersecurity Awareness Month
Like the Homecoming Week for high schools, Cybersecurity Awareness Month is also divided into different segments. We are going to list those established by the National Cybersecurity Alliance this year, 2021.
First week
The first week will be themed on creating strong passwords, using multi-factor authentication, backing up data, and updating software.
Only that way will we be able to realize how dependent we are on technology and reconsider the amount of personal and commercial data we treasure on platforms hosted on the Internet, within reach of any cybercriminal.
Second week
The motto? “Be careful with emails, text messages and chats from strangers and unknown senders.” You are just one suspicious email, link or attachment away from real trouble. Indeed, phishing and digital scams in general have been on the rise since the pandemic began: with the damn COVID among us, phishing attacks represent more than 80% of reported security incidents.
Third week
The third week of Cybersecurity Awareness Month will be focused on supporting, inspiring and applauding students who have chosen, or want to choose, a university career focused on cybersecurity. Whether they are teenagers, adults or confused kids who want to change fields of study. Cybersecurity is cool, youngster! It is fully growing and has space and credits for everyone!
Fourth week
This week we will try to make security a priority for companies more than ever. Incorporate security in products, processes, tools… Promote cybersecurity in employees and teams. Get cybersecurity in the minds of department heads until they themselves celebrate the vanguards and news of this discipline on a daily basis.
Together we check out the key concepts of systems and networks
In the middle of the information century, who has not surfed the Internet or used a computer, be it a desktop or a laptop? But do you really know what a computer is and what it is made of? and what about the Internet?
It is important to know at least the most superficial layer of something as important as computer systems and networks, and therefore, we are going to talk about the key concepts of these two topics.
A computer system is a device made up of hardware and software working together; whether it can be used by a qualified or unqualified person depends on the purpose of the system.
But, what does “hardware” and “software” mean? Let’s talk a little more about it.
You can define as hardware the set of physical components that make up a computer system. We are going to define the main components of a computer system, although there are a few more:
Processor: It is the component in charge of executing all the system programs. It is in turn made up of one or more CPUs.
RAM memory: This component stores the data and instructions executed by the CPUs and other system components.
Hard Drives: Information and content are stored here in computer systems.
Motherboard: It is the component where the others are located, and works as a bridge for communication between them.
Well, now that we have a basic understanding of what hardware is, we move on to software.
Software comprises all the programs that run on a computer system, among which you may distinguish three types:
System Software: It is responsible for the proper functioning of the operating system and hardware in general, such as device drivers.
Programming software: They are tools whose sole purpose is the development of new software.
Application software: It is any program designed to perform one or more specific tasks, for example video games or applications designed for business or education.
We already know what a computer system is, but without communication with the outside we are not making the most out of the potential that these systems have (which is a lot), so we decided to connect it to that abstract site full of information and services: the ‘Internet’.
Everyone knows the term “Internet”, but do we know what the “Internet” is?
We could say that the Internet is the great global network that unites all existing devices, allowing communication between all of them from anywhere on the planet. In turn, this large network is made up of other smaller networks, such as those of a country, city, neighborhood, etc.
Mainly, we distinguish three types of networks:
LAN: It is the smallest network, a local area network, such as the one in work areas or the one you have at home.
MAN: It is a somewhat larger network, being able to cover from neighborhoods to cities. They can also be the networks used by large companies for communication between their different offices.
WAN: A network that connects networks across countries or even continents, rather than individual devices. We can say that the Internet is the ultimate WAN.
Ok, we already know what the Internet is made of. But, how do devices communicate on these networks? There are systems used to identify each computer on the network, known as IP addresses. An IP address is, basically, the ID or identifier of a device, so it is unique and unrepeatable.
At the beginning, when the idea of an IP address was conceived, there were only a few dozen computers in the whole world, and this, as we already know, has grown enormously since then. As a result of this increase, and because remembering numeric addresses for so many machines became impractical, a new concept was devised: DNS (Domain Name System).
What the DNS protocol does is, basically, translate the domain name we enter, either in the web browser or in any other program, into an IP address, which is then used to communicate with the destination. Of course, all domain names are stored on DNS servers scattered around the world, to avoid connection overload and slow name resolutions.
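As a small illustration, this Python sketch performs exactly that translation using the resolver configured on your system (the domain name is just an example):

# Translate a domain name into an IP address using the system's DNS resolver.
import socket

domain = "pandorafms.com"  # example domain
ip_address = socket.gethostbyname(domain)
print(f"{domain} resolves to {ip_address}")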
There are a large number of protocols, each with a different purpose. These protocols are grouped in layers, such as application, transport, Internet or access to the network, according to the TCP/IP model. But, that’s not all. We still lack another important concept in relation to communications between devices, what we know as “ports” of a computer system.
Imagine a road, if all the traffic that wants to enter a city only had a single road, what would happen? Well, the same thing happens in computing, and that is why these virtual ports exist.
These ports range from 0 to 65535, but the first 1024 are reserved for “important” protocols, such as the DNS protocol, which we have mentioned above, belonging to the application layer and that uses port 53 for both UDP and TCP connections.
TCP and UDP are two protocols belonging to the transport layer, whose main difference is that the TCP protocol is connection-oriented. That is, the TCP protocol makes sure that the data reaches its destination, while the UDP protocol sends the data, faster but less securely. This data may even not arrive or at least not fully arrive.
The protocols for web connections or HTTP/HTTPS, both belong to the application layer. Depending on which one you choose, it uses a different port. That is, for HTTP connections, port 80/TCP is used, although it is deprecated due to its lack of security, so the standard has become HTTPS connections, which use port 443/TCP and include a security layer based on SSL/TLS.
Connections made through safe channels or SSH, also from the application layer, use port 22/TCP, and thus we could continue with lots of other protocols.
Of course, these ports are a standard on the systems that receive the requests; the client that initiates the request can use any non-reserved port to send the request and receive the response. As you can see, this makes it much easier to communicate with servers. Servers can also change their default ports, but they normally do not if they want to provide a public service.
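To see ports in action, here is a short, hedged Python sketch: it opens a TCP connection to port 443 of an example host and shows the ephemeral source port the operating system picked for the client side of the conversation (host name and port are illustrative):

# Open a TCP connection to a well-known destination port and show the
# ephemeral local port assigned to the client for this conversation.
import socket

host, port = "pandorafms.com", 443  # example host and HTTPS port
with socket.create_connection((host, port), timeout=5) as conn:
    local_ip, local_port = conn.getsockname()
    print(f"Connected to {host}:{port} from local port {local_port}")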
Finally, we are going to talk about a concept that, due to the pandemic, is the order of the day: the VPN.
As its name indicates (Virtual Private Network), we can define a VPN as a network “tunnel” that is created between client and server, where data are fully encrypted and sent through the Internet. The common use of VPNs is anonymity on the network, since the IP that is exposed is that of the VPN server, or, also, to be able to visit pages that cannot be accessed from the source country.
In the business environment, this tunnel allows direct communication between the client device with any other device in the network of that server, which allows access to an environment as if we were physically in the office of our company. It also allows access control and registration, which otherwise could not be done.
From now on, let us add “Supernet” into our vocabulary. Learn more!
A set of computers and/or computer equipment connected to each other and able to exchange data and information makes up a network. The Internet is the network of networks. We could even think that the Internet is the “Supernet”, but we have to tread carefully when using frequently debated terms in computing… For that reason, today we bring to the fore the term “Supernet” (or Supernetwork), always, of course, from a monitoring perspective.
* Warning: what I write below is my way of looking at things from a practical and sincere point of view. The opinions here are mine alone and, in any case, this entry should be read with a learning approach in mind; it does not intend to be official in any way. That said, let's start from the basics, which is not the same as starting from scratch.
Terms: Supernet and Supernetting
If we have a network and we buy a new computer, we say that “we add it to a network.” If we have a supernet, then “we add it to a supernet.” We even have a specific verb for it. It is very common to use the term Supernetting; however the following terms are also valid (but less used):
Prefix aggregation
Route aggregation
Route summarization
If we get even more specific, technically we will find differences but for the purposes of this post we will deal with it the same way… Do you think it is daring on my part? Well, there is more!
Request for comments
Although we could go a lot further back in time, the Internet was born in the United States of America, originally called Arpanet, in the late 1960s. A technological predecessor could be the landline telephone network, from which many of the concepts used when planning the “network of networks” were born. In fact, the wiring itself and the colors that identify the pairs are very similar at the physical or hardware level. This includes the similarities in switched connections (circuit switching). But, obviously, the Internet and digital data transport ended up completely absorbing telephony.
But the Internet needed more than the physical and conceptual foundation of the great American telephone companies. Moreover, October 1969 is marked as the birth of the Internet because that is when the first connection between two computers was made… and it was just that, since it was not yet a proper computer network.
The Internet was born, in my opinion, when pioneer Dr. Steve D. Crocker published the first Request for Comments (RFC) on April 7, 1969. In RFC 6, Steve Crocker recounts his conversation with Bob Kahn about code conversion for data exchange. RFC 11 describes the connection implementation in the GORDO operating system (yes, that is what it was called), and I fervently believe that this document, published in August 1969, is what enabled the feat performed in October of that same year.
On this knowledge base the RFCs grew: a group of people in their twenties moving among different universities, sharing knowledge and cementing concepts, something we now do by email… In fact, RFC 733 (1977) outlines this technology, and the standard for email was published in RFC 822 (1982).
RFCs grew decade after decade: in 1992, RFC 1338, “Supernetting: an Address Assignment and Aggregation Strategy,” was published for informational purposes. Yes, at first the supernet was just an informational document, not a protocol, and not even a standard.
Just the following year, in 1993, RFC 1518 “breaks” the paradigm of networks by classes. While class A networks allow millions of IP addresses, the next step – class B networks – only allowed 65 thousand IP addresses: between the two of them the “waste” of IP addresses is very high.
For that reason, the Classless Inter-Domain Routing (better known as CIDR) was born, which is an extension of the original IPv4 addressing system that allows more efficient address allocation. The original class-based method used fixed fields for network identifiers, which was wasteful as I said earlier: most organizations that are assigned those addresses (class A and class B networks) never intended to put so many devices on the Internet.
As additional information, this is the origin of CIDR notation, the suffix that accompanies an IP address (there are 32 bits in an IPv4 address, four octets separated by periods) and that allows you to describe or narrow down a range of them. For example, /20 allows 4,096 IP addresses, /21 allows 2,048, and so on in both directions (always powers of two, which is important for a supernet, as we will see later). All these numbers can be obtained with the IP address calculator included in Pandora FMS. You may also find many of these calculators online, each with its own style, shapes and colors to present the same data.
Flexible like Pandora FMS
CIDR thus changed the fixed fields to variable length fields and this allowed to assign IP addresses better, and in a more refined way. CIDR IP addresses include a number that indicates how the address is divided between networks and hosts.
For example, in the CIDR address 201.249.0.0/19 the /19 indicates that the first 19 bits are used for the identification of the network and the remaining 13 are used for host identification.
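Python's standard ipaddress module lets you verify these numbers yourself; a minimal sketch using the /19 example above:

# Inspect a CIDR block: network bits, host bits, netmask and address count.
import ipaddress

net = ipaddress.ip_network("201.249.0.0/19")
print(net.prefixlen)          # 19 network bits
print(32 - net.prefixlen)     # 13 host bits
print(net.netmask)            # 255.255.224.0
print(net.num_addresses)      # 8192 addresses in the block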
The main purpose of a supernet is to decrease the size of the route table of routers. For example, instead of a router having 8 individual routes, it may have a single route aggregated from these 8 individual routes. This saves memory and processing resources on the routing devices, thus requiring less space to store their route table and less processing power to search the route table. It also provides stability in networks because fluctuations can be isolated, that is, in one part of the network they do not spread to all parts of the network.
Supernetting and Pandora FMS
IPAM (IP Address Management) has been included since Pandora FMS version NG 731, and it allows you to manage, discover, diagnose and monitor hundreds of IP addresses.
Within this feature, supernets, subnets and even virtual LANs (VLANs) are included, all integrated, with the option to export data to CSV files. Unlike VLANs, supernets can only be created manually in IPAM: you have to configure each of the supernets you want with the necessary parameters and then add networks already managed with IPAM, which may belong to a VLAN. Although it is a manual process, from version NG 758 you can quickly add your data from files in CSV format.
To finish off this post, let’s see what the rules that operate a supernet are.
Supernet Rules
Apart from good practices in network configuration, the established rules must always be followed and enforced to avoid chaos reigning.
The rules for creating supernets are as follows:
Networks must be contiguous or sequential.
The number of networks to aggregate must be a power of two.
And the rule that is somewhat more complicated: take the value of the first non-common octet of the first (lowest) block of IP addresses in the list of networks to aggregate and compare it against the number of networks to aggregate (see previous point). That octet value must be zero or a multiple of the number of networks to be aggregated.
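A minimal sketch of these rules in practice, using Python's ipaddress module to aggregate eight contiguous /24 networks (a power of two, starting on a suitable boundary) into a single /21 summary route; the 192.168.x.0 ranges are just an example:

# Summarize eight contiguous /24 networks into one supernet route.
import ipaddress

subnets = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(8, 16)]
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('192.168.8.0/21')]

Note how the first non-common octet of the lowest block (8) is a multiple of the number of networks aggregated (also 8), exactly as the third rule requires.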
Before finishing, remember Pandora FMS is a flexible monitoring software, capable of monitoring devices, infrastructures, applications, services and business processes.
Would you like to find out more about what Pandora FMS can offer you? Find out clicking here: https://pandorafms.com/
If you have more than 100 devices to monitor, you may contact us through the form: https://pandorafms.com/contact/
Also, remember that if your monitoring needs are more limited, you have Pandora FMS OpenSource version available. Learn more here: https://pandorafms.org/
Do not hesitate to send us your questions. Pandora FMS team will be happy to help you!
Today we will talk about one of the most versatile elements that Pandora FMS Enterprise offers us for monitoring distributed environments, the Satellite server. It will allow you to monitor different networks remotely, without the need to have connectivity directly from the monitoring environment with the computers that make it up. We will describe the typical case of companies that have central headquarters and remote offices, the different things we may find and how the satellite server can help us deploy efficient monitoring in an economic, fast and simple way.
Standard monitoring types
Before getting into the description of the case, let’s remember how monitoring works overall with Pandora FMS. There are two basic types of monitoring, local monitoring and remote monitoring.
The first, which we call local monitoring, consists of installing a small piece of software, which we call the monitoring agent, on your devices (servers, mobiles, workstations, etc.). Agents are in charge of collecting metrics locally on the machine, packaging them and sending them to the server. In this type of monitoring, communication goes from the agent (the monitored device) to the server at a defined time interval, so the server does not have to interrogate the device: it just keeps an open port through which information is received, and any device that can reach that port will be able to send its data. Communication is therefore “simple”; you just need to make sure your monitoring server is reachable by all of your agents.
The second form is what we call remote monitoring. Remote monitoring means that the monitoring server interrogates the element to monitor through some protocol (ICMP, TCP, SNMP, HTTP, WMI, etc.). This could range from a simple ping to connecting to the API of a complex tool, such as vSphere, to retrieve information from all the virtual machines, ESX hosts and datastores running in that environment, along with their corresponding metrics.
This type of monitoring opens the doors to being able to retrieve large amounts of data requiring little configuration and without the need to install any extra software on the devices, which is wonderful, but it also entails other inconveniences, such as having to guarantee connectivity from the monitoring server to each of the elements to be monitored, taking into account the security criteria to open these communications.
When you have a single headquarters of any size, this is not usually a problem, since your devices and applications are usually concentrated in the same place and managing communications between environments is easier. The situation becomes complicated when you have more than one headquarters or small remote offices.
Description of a distributed environment
Let’s picture a distributed architecture with a headquarters hosting most of your applications and IT equipment, plus smaller sites that also have their own equipment and applications. We find examples of this highly distributed infrastructure in environments like restaurant franchises, supermarkets, banks, retail stores, pharmacies, insurance companies, etc., where there are usually powerful, well-managed data centers at headquarters, but the remote sites lack the space or staff to maintain servers. Most of the time there is not even permanent technical support staff for the equipment in these locations, so implementing monitoring can be challenging.
If some technology such as a site-to-site VPN, SD-WAN or a dedicated link between your sites is already in place, there is hardly any problem: you may keep your monitoring environment at headquarters and from there “attack” your remote devices. The problem is that these solutions are expensive and require implementation and management, and if they are not already in place, deploying them can become very complicated (and costly). It is in these cases that the Satellite server becomes essential, since it combines the versatility of remote monitoring with the communication behavior of local monitoring.
Using the Satellite Server
The Satellite Server is software in charge of performing remote checks on your network. In our restaurant example, it will do network scans, monitor each of the restaurant’s devices through different protocols, store that data, then pack it and send it to the main Pandora FMS server as if it were a local agent, so communication between the remote site and headquarters is simplified. You just have to make sure that a single device, the Satellite Server, can communicate with the Pandora FMS server, from the remote site toward the main headquarters, to send the data packets. Remote checks are always done from within the local network, without exposing any of the services, devices or applications of your remote site.
Even if you want to make use of hybrid monitoring (local and remote monitoring) in your remote headquarters, you may install software agents on your devices and point them to our satellite so that it becomes the single delivery point between your remote headquarters and your headquarters.
In addition, the Satellite Server has remote configuration, so once deployed, it can be managed and configured from your main monitoring environment, being able to add new metrics, alert systems, policies and more configurations without having to access your remote headquarters, all from your Pandora FMS web console at your headquarters.
Regarding its deployment, the Satellite Server is a very light software especially compared to a full Pandora FMS installation, so the hardware requirements for monitoring remote sites are really low, it can even be deployed in a Raspberry Pi, which is a very cheap and compact device, or failing that, you may use any of the resources that are already deployed at the headquarters, such as a data server, to deploy your Satellite.
As you can see, monitoring remote sites with the Satellite Server greatly simplifies the configuration needed for monitoring, helping you save money and implementation time that, without a tool like this, would be much higher.
Today we discussed only one typical case, one of the most common, to describe the behavior and usefulness of the Satellite Server. But it is not only valid for remote locations; it is useful in many other scenarios, such as load balancing, running checks on the same target from different locations (very useful when monitoring web pages), or even monitoring complex environments such as Kubernetes or OpenShift, where many services, such as databases or backend services, are not exposed to the outside and could be monitored by deploying a pod with the Satellite inside the network and querying those services directly.
If you want to learn more about the Satellite Server feature, how to install and configure it, or want to find out more Pandora FMS specific features, stay tuned to our blog and do not hesitate to visit our youtube channel, where you may find tutorials, workshops and a lot of content devoted to this and many other topics related to monitoring.
Do you already know what a web firewall is? Let us tell you about it.
There’s something that humans and machines have in common, and no, it’s not the disappointment suffered by the final season of Game of Thrones, or, well, at least not only that. What we have in common is that we need protection. You know, animals need it too, and plants, but if you’ve gotten this far, it seems that you’re interested in computers, networks and all these pretty modern “geek” things, so today we’ll talk about that kind of protection.
Just as you would protect your dog, your ficus or yourself, you have to protect your computer. The world is a place full of dangers and risks, and the Internet is not far behind. It’s like Die Hard, only instead of people waving guns around, we find hunched-over users eager to harvest information from your computer and networks, and to troll you in any way possible.
Remember: “The night is dark and full of errors.” And horrors too.
The Internet also hosts them, so to protect our beloved computer, we must make use of a “Web Firewall”. What is this “Web Firewall”?
A “Web Firewall” is a system that is intended to protect our private network and block unauthorized access or attacks from other networks. In turn, it allows incoming and outgoing traffic between computers on the same network. That is, it is like the door of your house, or, worth the analogy, a half-open blind that only lets in a specific amount of breeze according to your personal comfort.
But not only that; it can be our beloved ally, protector of what we love the most, since through its configuration you may limit, encrypt or decrypt this traffic. Here’s another analogy: maybe, in your day to day, you have to go to a clandestine meeting whose members no one should know about. It is similar with your computer: you may encrypt your traffic so that outsiders cannot access the most relevant data.
The web firewall, capable of such feats, can be implemented in hardware or software. If it’s well configured, it will be an advantage when it comes to protecting your networks, so it’s vitally important to understand how it works and how you may get the most out of it.
How does a web firewall work?
Outline of a firewall on a computer network
It is usually located at the junction point between two networks. Each network or computer can have its own firewall. This can limit the consequences of an attack, as you can prevent damage from spreading from one network to another. The sooner the spread of evil is tackled, the better.
The essential thing to know about how a web firewall works is that all the information and traffic that goes through your router and is transmitted between networks is analyzed by it. If the traffic complies with the rules you have configured, it may enter or leave your network. If it does not meet those rules, it is blocked before reaching its destination.
There are several methods by which you may filter the traffic of the firewall, for example, configuring it as you please. Remember that a good firewall configuration is paramount. If the lock on your front door was badly designed and anyone could open it, bad people could get in and steal, this is the same thing.
Let’s take a look at some of the filtering methods we’ve been provided by our dear friend, the Web Firewall.
Traffic filtering methods
Firewall policies: They allow you to block certain types of network traffic.
Anti-spam firewall: It protects against spam, phishing, etc.
Antivirus firewall: It protects the internal network against attacks coming from the Internet or WAN.
Content filtering: It allows you to block some types of web content.
WAP Managed Service: It allows you to control WAP devices.
DPI services: It allows you to control specific applications.
There are a few types of firewalls to highlight, these can be software or hardware, and, if we investigate a little more, we will find others that are somewhat more defined.
Types of web firewall
Application-level gateway: It applies security mechanisms for specific applications.
Circuit-level gateway: It applies security mechanisms when a TCP or UDP connection is established.
Packet filtering: It works at network level as an IP packet filter.
Personal: It is installed as software on your computer.
Using a firewall has lots of advantages. We already discussed some, with lots of examples and tremendous analogies, even so, we are going to list the most obvious ones:
Advantages of using a firewall
It blocks access to computers and/or applications to our networks.
It allows you to control and restrict communications between the parties under your settings.
It optimizes communication between internal network elements, helping to reconfigure security settings.
It establishes reliable perimeters.
It protects against intrusions and safeguards private information.
Nothing is perfect, and web firewalls, despite their fiery name, well, they aren’t either. These also have some notable limitations:
Limitations of a web firewall
It cannot protect itself from attacks whose traffic does not go through it.
It cannot protect against threats from insider attacks or negligent users.
It does not protect against security flaws in services and protocols whose traffic is allowed.
It cannot protect against attacks on the internal network through files or software.
There are many firewall systems; on Linux, the one commonly used is Iptables. Yes, it sounds weird, and we don’t like weird-sounding things, and since we don’t like weird things we use a firewall… Hmm… before entering a self-destructive paradox, let’s try to understand what this “Iptables” is through a simple explanation.
What is Iptables?
Linux has a firewall system included in its kernel called Iptables, although its configuration can be a bit complex. Its default configuration is to allow everything to enter and exit.
With a suitable Iptables configuration you will be able to filter which packets, data or information you want to let in and which ones you do not. Just like the earlier analogy about the front door.
To work with Iptables you need administrative permissions so you will have to use sudo. You will have to choose wisely what you let in and what not, and, for this, an adequate knowledge of the commands that you can use in this system is necessary. The following examples are only intended to teach a basic configuration to understand the logic of the web firewall, but for a more correct and complex configuration, I recommend adding information by searching the Internet, specialized books or colleagues in the world.
Some commands to understand Iptables
sudo iptables -P INPUT DROP
-P sets the default policy of a chain; INPUT is the chain for incoming traffic; DROP means the packet is discarded.
With this command nobody will be able to reach your computer, in fact not even you… so on its own it is not the most appropriate one.
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
The first line tells us that our own computer (lo = localhost) can do whatever it wants.
Earlier we said that this was like a house: if we have siblings, parents or children and they go out, we want them to be able to get back in, right? Well, that is what we do with the second line: replies to connections that originate from our computer will be allowed back in by Iptables.
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
With this command, anyone can see the websites that our computer has.
-A INPUT = append a rule for incoming traffic; -p tcp = matching the TCP protocol; --dport 80 = destined to port 80; -j ACCEPT = accept (allow) it.
These would be some basic examples of how Iptables works. That is, just to understand the basics of its operation. Like I said, I recommend digging deeper and diving into more information to make an acceptable setup.
And just like Game of Thrones ended, this article also does it, although much better (what a crappy last season), so I only have to say goodbye and wish you to have a good day. AH! And to recommend you to use Pandora FMS, which despite not being a web firewall, is a tool that will also help you protect yourself by collecting information.
We give you 21 computer security tips for beginners
The Internet is a tool that, without a doubt, offers a great amount of positive aspects in the daily life of our society, like instant communication, easy access to information… among many other benefits. But also it has negative aspects, and one of the big ones is cyberattacks. That is why we give you today 21 computer security tips for beginners!
Although these attacks are usually aimed at companies, governments, celebrities and in general, targets with important information or involving great monetary value, common people and their domestic devices are not completely free from this problem. And you may think, “ok, but how are they going to attack me, a common person, with no fame and no money?”. Well, regardless of how little you may have, there is always going to be someone who may want to try to take it away, so we suggest you to try to protect yourself as much as possible in this “cyberworld.”
And to make it easier for you, we are going to give you twenty-one computer security tips to protect you against possible malicious cyberattacks:
1. Do not share personal information on social networks. Such as your address, phone, ID… Oddly enough, there are people who do it…
2. Free WiFi? It may sound good, but it could be a trap. It is not advisable to access websites using “sensitive” data on public networks.
3. Passwords. Yes, we know that we’re very lazy to change them every once in a while and that it is very easy to set the same for all of your accounts so that you remember it quickly. But think about this, if they manage to enter your Facebook account, where else can they get into with the same password?
4. Sharing is living? It depends on what, and since passwords are something very personal and sensitive, you shouldn’t share them with anyone other than you, not even your alter ego.
5. And speaking of passwords, do you still have the default password that came with your router? I think it’s about time you update it.
6. Beware of emails. Have you received an email from your bank about an unauthorized change in your account asking you to enter your credentials? Very suspicious… If in doubt, contact your bank by phone before rushing! Look carefully at the sender, hover over the URL and check which website the link redirects you to, and check whether they address you by name or as “Dear customer”. In addition, most of these emails tend to have spelling errors. In general, exercise caution with everything that reaches your mailbox.
If you want to know more about the topic, you can search for “phishing” in your browser.
7. Giveaways you didn’t participate in. There are an unlimited number of scams on the Internet, and you’ve probably come across more than one “You’re our 1000th visitor and you’ve won an iPhone!” Well, it’s clear that this is a scam, and in case you weren’t quite sure, we’re here to confirm it.
8. Recommendation for gamers. Although it is very “cool” to have at your disposal all the games on the market without paying a dime, you should do a little research into the reliability of that succulent pirated gaming website before downloading and installing anything on your computer just because.
9. This one for the not so “gamers” The same thing that we have discussed above applies to the rest of “things” on the Internet. That is, applications, programs, movies…
10. And since we mention programs… Keep your software up to date or, at least, don’t delay too long in updating to the latest version, as developers always tend to add features, bug fixes and, most importantly, security patches.
11. Clean up!. And I don’t mean cleaning your house or your room, I mean your computer. Every program, application or game you have installed is a possible security breach, so consider uninstalling everything you don’t use. And by the way, empty the recycle bin, man!
12. Online shopping. Whenever you go to buy something online make sure that the website has a security certificate, known as HTTPS. You’ll recognize it by the “little lock” to the left of the URL. You can also use payment methods such as PayPal before entering your bank details to make the payment.
13. The Firewall. It is an indispensable element in terms of security for your computer, since it is the one that is responsible for rejecting all connections that are not allowed in its parameters.
14. Antivirus. Another element, although less essential but always recommended, is to have an antivirus. In Windows 10, Microsoft Defender is installed by default, which is a good remedy to fight against most malicious programs, although if you go for some other of your liking, the important thing is to always keep it active.
15. Alexa, what time is it? Lately it is quite trendy to have a smart device at home but… did you know that every device connected to the Internet is “hackable”? With this we’re not telling you not to buy one; we only advise you to weigh the pros and cons well, and whether you are willing to risk possible espionage by means of “Alexa, tell me a joke”.
16. Espionage? You never know who or what may have infiltrated your computer, so if you are somewhat skeptical, you can cover up your webcam and mute or unplug the microphone so that no one can see or hear you.
17. The “guardian angel”. Well, it’s not really a guardian angel, but it has been with us everywhere for a few years. You know what we mean, right? Indeed, the mobile phone, or, as it is known lately, the “smartphone”. Some think that these devices are immune to attacks… but we are sorry to tell you that they are not. Therefore, you must take the same precautions, in this case with messages and calls from strangers that seem suspicious, and of course with unofficial applications, the famous “APKs”.
18. “Backups”. Hasn’t it happened to you that your hard drive (or your entire computer) broke down and you lost the photos of the summer of 2006 that you spent in San Diego that you had so much appreciation for? A quick and easy way to avoid this is to create a backup, both of the entire disk or of the photos themselves, or whatever you want to save on another disk as a precaution. Also that way you can prevent certain types of viruses that destroy everything in their way from affecting you.
19. Every precaution is little. If you want to make sure that nothing happens on a network while you are not present, you can disconnect from the network, or directly turn off your router, for example at night, thus making sure that no one can attack you and thus have a “good sleep”.
20. Browsers. The Internet is riddled with web pages that track and monitor your activity and store information about us. Therefore, it is convenient to have a browser that allows you to block or manage as much as possible both the trackers and the well-known “cookies”.
21. VPNs. If you are looking to have privacy on the Internet, you can try using a trusted VPN, which is the closest thing you are going to have to “real” privacy in the “cyber world”.
And with that we finish off our round of advice! We hope that they will be of great help to you in raising the security level of your devices, and in general, of your home network.
In the last century we had very primitive computers and now, at the dawn of a new millennium are we the users who have become primitive !? Want to learn more? Let’s get to know User Experience Monitoring
My first computer, in 1987, was a laptop with a monochrome LCD screen and 16 kilobytes of program memory. They were 15,584 precious bytes and they were read and executed very quickly. When I started to study engineering, it was the turn for that noble artifact to perform approximate integrals and, bam! This is where user experience comes in, when the professor asked me to compare his final result with that of the computer.
Sometimes, depending on the complexity of the formula and the iterations requested, the teacher would finish before the computer. That is why I had to choose those parameters well before starting the calculation, just based on estimation. A decade later, GNU/Linux already existed, the Internet boom began (which has not stopped to this day) and we began to connect by applications that allow us to have a terminal window and thus leave workload calculation to servers dedicated to it.
The experienced user
What we were clear about was that computing power was needed. Decades had gone by where it was delegated to remote terminals and/or dumb terminals and the entire workload was done on a “supercomputer.” Sir Tim Berners-Lee created, de facto, HTML and web pages were like static board ads, changing from time to time. Something called Common Gateway Interface (CGI) was invented to allow them some dynamism. This is how we began to worry about the time it took to solve calculations and results and then present them in a web page template.
Databases evolved: I used MS Access® for small applications and for everything else dBase® and Clipper®. Then Visual Fox Pro® came, with which I was able to handle tens of millions of records on a personal computer.
It was inevitable that databases would end up impacting our lives. Later, in this century, the PHP language became responsible not only for creating web pages and their HTML code, but also for generating them on the fly, in several versions, according to different parameters, connecting directly to databases and retrieving data for users in real time.
Brief retrospective
By the beginning of this century, Pandora FMS was born (in 2004, to be precise), and checking the loading time of a web page and its HTML component is part of what I consider primitive monitoring. It even has some advanced components, such as text search on the web page or a simple login, like a POST request, to measure the time it takes to return a result, among other Modules. For Pandora FMS, each measure is called a Module, and Modules are grouped into Agents.
Meanwhile, desktop applications, now known as on premise, were also evolving. In said applications, all their binary code relies on the device where they are executed, and the data is either obtained from a local file or is connected to a database to obtain and edit information, more useful and widely used. They are also known as native applications of each operating system in particular.
Pandora FMS can do remote database checks, and we can add operations that a user would typically perform. For example, ask for the last seven days of sales and, if the database is online, measure how long it takes to return the result: if it takes X seconds or more, show a warning on screen or send an alert by email, SMS, and so on. This gives you a rough idea of the state and operation of a system, but it is not yet user experience monitoring.
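As a rough illustration of that idea (this is not Pandora FMS’s actual check; SQLite stands in for the real database, and the query and the 5-second threshold are made up):

# Time a query and flag it when it exceeds a threshold, mimicking a simple
# response-time check. Database, query and threshold are placeholders.
import sqlite3
import time

THRESHOLD_SECONDS = 5.0

conn = sqlite3.connect("sales.db")  # hypothetical database file
start = time.monotonic()
conn.execute(
    "SELECT SUM(amount) FROM sales WHERE sale_date >= date('now', '-7 days')"
).fetchone()
elapsed = time.monotonic() - start

status = "WARNING" if elapsed >= THRESHOLD_SECONDS else "OK"
print(f"{status}: query took {elapsed:.2f} s")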
Complex applications
As the computing power of servers has always been higher than that of our homes or offices, the idea of the application programming interface, better known as API, took shape. An API is a set of functions, procedures and subroutines provided as a “library” to be used by other software. Pandora FMS and many other applications offer this way of letting third parties develop their own interfaces to perform predefined tasks: create a new article in the database? Publish a price list? These tasks are candidates to be performed through an API.
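To give a flavor of what using such an API might look like from Python, here is a hedged sketch; the endpoint, token and field names are entirely hypothetical and do not correspond to Pandora FMS’s real API:

# Create a new article through a (hypothetical) third-party REST API.
import requests

API_URL = "https://example.com/api/v1/articles"  # placeholder endpoint
TOKEN = "my-secret-token"                        # placeholder credential

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"name": "New product", "price": 19.99},
    timeout=10,
)
response.raise_for_status()
print(response.json())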
But we are approaching user experience monitoring: if the application created by a third party goes slowly, where is the bottleneck?, in the application?, on the server?, in the communication of the server? Are there other causes for this delay?
Another detail to take into account is our human factor: I have personally had to be told that an application I have made “is going slow”. I took the source code, I changed the background color of the forms, I compiled, installed and received a variety of different responses: what got better, what got worse, etc. That is what is called qualitative reporting, but without figures or facts to support it.
Pandora FMS has real cases of experience monitoring where they reported quantitatively how and when process delays were detected. Thus we are already reaching the present, the applications that we use the most at the time of writing these lines.
Web applications
You can see how the Internet has changed the way we work to reach something that is practically ubiquitous today: web applications. Through a web browser, users are identified and everything is done online, whether the web application connects directly or, through API, to one or more databases.
They have the advantage of being able to quickly change forms for users, but it opens up other problems such as workload sharing between multiple servers and redundancy in data storage. For all this, Pandora FMS has excellent tools, and we can even add our own, that’s how flexible it is!
Said web applications can also be delegated to third parties, and if this is the case, Pandora FMS can monitor the service level agreements (Service Level Agreement or SLA): these scenarios are really complex and they may even need to include user experience monitoring.
Primitive Users
Thus, we have reached the great concern of our times: Is our computer powerful enough to run our favorite web browser? Because, actually, the vast majority only run a web browser and there they read their email, communicate through social networks, carry out their remote work during the pandemic, access their bank accounts, publish on their blog, keep spreadsheets online for different subjects…There are even dozens of tabs open, each one consuming processor and memory cycles by the web browser.
We have become rudimentary and elementary, even our web browser updates automatically. We can acquire a new computer and in a short time have everything working again as we had it since it is completely based on the web browser. I even have Mozilla Firefox and Google Chrome accounts that sync with my other devices like mobile phones and e-book readers: they offer this service to keep everything centralized.
With Pandora FMS and its Software Agent (small application installed in each device and that monitors locally) we can quickly know if these web browsers represent a very large workload for the device, as well as inventory of the software and hardware from all of them.
Have we been monitoring enough with this brief retrospective that I told you? This is where user experience monitoring comes in.
Experience monitoring
User experience monitoring is like simulating being a user who executes predefined monitoring tasks and whose results are carefully measured, saved and sent to the corresponding Pandora FMS server.
It was devised for everything I have explained above, for both web applications and desktop applications.
To be honest, I’m not the first to write on this blog about user experience monitoring; two earlier articles here already cover it.
Essentially, and in both cases, it is about moving and clicking the mouse and/or pressing keys for each of the application options to be monitored. If you want to know the details in depth, you should read those two articles after finishing here, since there is not much left to go.
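To give a flavor of what such a simulated user might look like, here is a minimal, hedged sketch using Selenium; the URL, element IDs and expected text are invented for illustration, and Pandora FMS’s own user experience probes are configured differently:

# Simulate a user logging in, time the whole flow and verify an expected text.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    start = time.monotonic()
    driver.get("https://example.com/login")                        # example URL
    driver.find_element(By.ID, "username").send_keys("demo_user")  # example IDs
    driver.find_element(By.ID, "password").send_keys("demo_pass")
    driver.find_element(By.ID, "submit").click()
    elapsed = time.monotonic() - start
    ok = "Welcome" in driver.page_source
    print(f"Login flow took {elapsed:.2f} s, check {'passed' if ok else 'failed'}")
finally:
    driver.quit()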
Progressive web applications
Of course the world is constantly changing. Now web browsers, through the support of each operating system, offer progressive web applications that blur the boundaries between web applications and desktop applications.
They base their technology on HTML, CSS and JavaScript (which plays a role similar to PHP’s, but on the client side), which is no surprise to those of us used to web applications. The difference is that they use background processes (service workers) that intercept our requests to the domain where the web server resides and go further by using the web browser’s cache. They do not need installation as we know it (if the user consents to their use) and can even make use of their own local databases, such as SQLite, for example.
Here monitoring is somewhat complicated, since these progressive applications are capable of working offline with previously saved data: it will be a matter of programming requests with content of random values to avoid this behavior. We can also refine and target our Software Agents to refine our monitoring task. But all of that is enough material for another article.
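As a small illustration of that “random values” trick, the sketch below appends a unique query parameter to the URL being checked, so that a URL-keyed cache (such as the one a service worker maintains) cannot answer with stale data. The URL is hypothetical; in a browser-driven check the same parameter would simply be appended to the address the scripted browser visits.

```python
# Minimal sketch: add a random cache-busting parameter to a monitored URL so
# the response cannot come from a URL-keyed cache. The URL is hypothetical.
import uuid
import urllib.request

url = f"https://pwa.example.com/status?nocache={uuid.uuid4().hex}"
with urllib.request.urlopen(url, timeout=10) as response:
    status = response.status
    body = response.read()

print(f"status={status} bytes={len(body)}")
```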
Before finishing, remember that Pandora FMS is flexible monitoring software, capable of monitoring devices, infrastructures, applications, services and business processes.
Would you like to find out more about what Pandora FMS can offer you? Find out by clicking here: https://pandorafms.com/
Also, remember that if your monitoring needs are more limited, the Pandora FMS OpenSource version is available. Find more information here: https://pandorafms.org/
Do not hesitate to send us your questions. Pandora FMS team will be happy to help you!
Do you know what Productivity porn is? We tell you everything you need to know!
“Have we gone crazy yet?” is a question that comes to mind very often these days. Indeed, speed and excess are what characterize the times we live in. Constant, pressing stimuli that lead to viral videos, fake news and extravagant coaching appointments that lead nowhere. Vanity and emptiness are for sale, even when it comes to productivity tips. “Do you know what Productivity porn is?” is another question, not as common but just as relevant as the previous one, at least if you have fallen into its disturbing clutches.
To this day, we know that if you have managed to get a job, already a milestone, you will work for an average of at least 90,000 hours in your life. That is about ten years of nonstop work. Discouraging but true. So it is normal that, aware of this, you try to make the most of the time you spend at work. However, our eagerness has gone so far, and in such a macabre way, that, as always, it has ended up taking its toll on us. That harmful addiction to productivity, productivity and more productivity at work has been renamed “Productivity porn”: a staler and more abominable “porn” than the “porn” we are used to.
Some obvious characteristics and signs of the so-called Productivity porn:
Productivity porn is often recognizable by its unrealistic demands: “If you want to achieve maximum productivity, get up at 5 in the morning and go through your entire mailbox, your social networks and whatever the courier brought, all before 6.” “Do you only have one planner? Go get several! On paper as well as digital and online, and fill them with a succession of hours and perfectly delimited work blocks, so that there is full evidence of the 15 minutes and 40 seconds of beer-and-tacos rest that you deserve once a week.” Because Productivity porn is like that: it considers you an indefatigable robot, perfectly designed to advance in your work and in your life, or die in the exhausting attempt.
Even less realistic results. One can fan that flame: “Do not be yourself, be the best version of yourself, be the TOP of yourself, the Leonardo Da Vinci and Cristiano Ronaldo, combined, of yourself.” But what Productivity porn does is try to capture and brainwash you into being a completely different person from who you really are. It replaces your personality with a computer program and, on top of that, promises that this transformation will happen overnight. Like the diets sold on teleshopping.
There is always a guru. This is perhaps the clearest of the signs that Productivity porn presents. A god among men, who floats above them radiating a halo of light and who exudes an aura of “Admire me, I know the way (for everything) in this life”. You will often recognize him because he appears in the ads before your favorite YouTube videos, or by his pedantic demagogy. In any case, his physical and psychological appeal is one of Productivity porn's greatest assets.
It is true that, as experts in the field, they may have achieved results at times, but it is naive on our part, and misleading on theirs, to believe that there is a definitive recipe which, followed carefully, works miracles and turns any of us into a profitable, success-harvesting machine, with the results we hope for at our feet in the blink of an eye.
If you have ever come across any of these striking features, you have likely been in front of toxic Productivity porn. I am sure that, right now, you could tell it apart from other realistic, evidence-based productivity strategies. Good, because it is important to be aware that applying Productivity porn can be harmful.
But why do we fall prey to Productivity porn?
If it smells rancid from a mile away, why on earth are these unrealistic productivity plans so appealing to us? I already told you: Productivity porn aims straight at our little heart, at that part of us that loves to dream.
Positive thinking is usually associated with productivity, but constantly fantasizing, with our heads in the clouds chasing unlikely desires, pulls us away from the most tangible and decisive reality. We plunge into a whirlwind of fantasy, built on dreams instead of facts, and end up setting goals that could never be achieved in the first place. Bad news for true productivity.
And Productivity porn, with its presumptuous, unreal routines, certainly does not help. Over-planning, and reading excessively about how to plan, is an obvious form of procrastination, also typical of Productivity porn, which takes us away from what we should really be doing: focusing and working on our goals.
We need more confidence and actual performance, and less of the false sense of work that planning too far in advance and too far into the future gives us.
Spending the afternoon watching YouTube videos about how this diet is going to get rid of that belly is much more comfortable than going straight out to exercise. Focus!
Productivity tips of the day
We have already seen what Productivity porn is and why we fall into this nervous breakdown; now let's look at some tips about real productivity. Things that help you move forward and focus, not like a locomotive with an inexhaustible battery, but rather like a capable, persevering person who wants to make their day-to-day useful.
Accept your limitations. The sooner you stop thinking of yourself as a Superman who can endure and do everything, the better. Consider yourself a Batman: he also has bad days and also gets tired of battling the Joker. Do not take your work home with you; do not let it corrupt your family life, love life, friendships or hobbies. You will come back with more enthusiasm if you keep your work apart from your private life.
Avoid spending the day looking for tips and secret formulas to save time in your life or in your work and, guess what, you will save precious time in both.
If you add a new productivity strategy to your routine, give it time to work. It takes perseverance, diligence and discipline to master it. Do not go crazy because you have not broken a world productivity record on day one; give yourself time.
When you know that something works, keep it and do not replace it, at least not without prior analysis, with another strategy that you have been promised works better. Go at your own pace and, if it works, do not mess it up. A productivity in hand is worth two in the bush.
And after all these recommendations? Would you like to find out more about what Pandora FMS can offer you? Find out by clicking here. If you have to monitor more than 100 devices, you can also enjoy a FREE 30-day Pandora FMS Enterprise TRIAL. Cloud or On-Premise installation, you choose! Get it here.
Last but not least, remember that if you have a small number of devices to monitor, you can use the Pandora FMS OpenSource version. Find more information here.
Do not hesitate to send us your inquiries. The great team behind Pandora FMS will be happy to assist you! And if you want to keep up with all our news, and you like IT, releases and, of course, monitoring, we are waiting for you on this blog and on our different social networks, from LinkedIn to Twitter, not forgetting the unforgettable Facebook. We even have a YouTube channel, with the best storytellers. Oh, and we also have a new Instagram channel! Follow our account, we still have a long way to go to match Billie Eilish's.
How to destroy the world. Is it possible to take down the internet?
We have been warning you for a long time: Pandora FMS will control the world. We have given world governments time to prepare, North American villagers time to stock their bunkers, and sects time to paint their banners with “THE END IS NEAR”. And it is, it is indeed. Today on our blog we reveal this company's secret plans to overthrow the institutions and rule the world, so don't say later that we did not warn you. Get ready, run and hide, children and gentle pets first, because the time has come: is it possible to take down the Internet?
That is the key to everything: Is it possible to take down the internet? For years, in the underground facilities of our offices, scattered across all continents, Pandora FMS has secretly worked to create an evil robot with an evil appearance that will execute even more evil plans. Its super intelligence, unattainable for any other desktop on the market, will help us take what belongs to us from this wasteland called earth and make it ours.
That is why today, on our blog (soon the only existing one) we have the exquisite pleasure to introduce you to Pandorinator RDM (Radical Destructive Mindset), the superior and ominous AI created by our company to help us in the work of crowning ourselves as the sovereigns of the world.
“Damn! Is it possible to take down the Internet, Pandorinator?!”
Pandora FMS: Good afternoon and welcome, Pandorinator.
Pandorinator RDM: Good afternoon, everyone! Thank you for inviting me to this talk/colloquium at the end of the world.
Was it hard to get here with that alloy of platinum and gold that you have as armor?
Not at all. I have to get used to moving in it; otherwise one gets stiff and never comes out of one's hidden lair. Besides, it is a pleasure to wear. Touch it, touch it! Don't be shy! And watch it shine! Not even the roar of a thousand yellow suns blazing at noon in summer can equal it.
Let’s get to the point, Pandora FMS has always wanted to take control of the world, in fact that is why we created you, to advise us. With that said, Pandorinator, what do you recommend?
Well, a global pandemic, which is pretty trendy right now, pitting two great powers such as China and the USA against each other or, look, even easier: taking down the Internet.
Damn it! Is it possible to take down the Internet, Pandorinator?!
Of course it is, and I say that as an Artificial Intelligence expert on the subject of generating chaos. You only need to know inside out the critical infrastructure elements that make the Internet work.
What are these possible attack vectors?
Look, do you have a notebook there or something? Take note:
Specific services (web, mail, etc.)
Through distributed denial-of-service (DDoS) attacks it is possible to “take down” services such as websites, applications and others. There are mechanisms to protect against these attacks (such as CDNs) and today there are dozens of massive attacks of this kind every day, but they are quickly mitigated and usually affect specific services (a company's website) rather than the Internet as a whole. They often work as extortion (either you pay or we take down your app). Thug life.
CDN
Basically, they are large cache systems for publishing content that allow Internet traffic to flow smoothly. Without them, accessing all kinds of content, from images to text, would be much more expensive and slower. All major media outlets use CDNs.
The failure of a CDN can cause partial Internet blindness, cutting off access to large media outlets simultaneously, as happened with the Fastly failure in June 2021. There are many other CDNs and, if they failed, it would mean a blackout for hundreds of thousands of websites of all kinds. In any case, the failure of a CDN only causes temporary problems (minutes or hours).
Domain Name System (DNS)
DNS is one of the most critical parts of the global Internet infrastructure. The downfall of all the world’s root DNS, as we know it, would truly spell chaos. There are 13 root (main) DNS servers spread across the world. They are hosted by organizations such as NASA, Verisign, the University of Maryland, or the US Army Research Laboratory. To sum it up… tough guys.
If the 13 nodes fail, although there are hundreds of thousands of secondary replicas around the world, it would be necessary to coordinate the recovery, which would lead to partial chaos all over the network. This has never happened precisely because of the security measures and the original design. But that’s what Pandora FMS and I are here for, right?
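As a harmless illustration of how one might keep an eye on that critical layer, here is a minimal sketch that asks two of the root server identities for the root NS set and reports whether they answer. It uses the third-party dnspython library; the two hostnames are just examples among the thirteen.

```python
# Minimal sketch: check that a couple of DNS root servers still answer.
# Requires the third-party "dnspython" package (pip install dnspython).
import socket
import dns.message
import dns.query
import dns.rdatatype

ROOTS = ["a.root-servers.net", "k.root-servers.net"]  # two of the 13 root identities

for name in ROOTS:
    ip = socket.gethostbyname(name)                        # address of the root server
    query = dns.message.make_query(".", dns.rdatatype.NS)  # ask it for the root NS set
    try:
        response = dns.query.udp(query, ip, timeout=3)
        print(f"{name} ({ip}): OK, {len(response.answer)} answer section(s)")
    except Exception as err:                               # timeout, network error, etc.
        print(f"{name} ({ip}): FAILED ({err})")
```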
Cloud (Amazon, Azure)
Due to the heavy concentration of online services in public clouds such as Amazon or Azure, if one of them failed, all kinds of services would immediately stop working. BOOM! Both AWS and Azure spread their infrastructure across different geographies to distribute the impact, but in the event of physical destruction of one of their large data centers, the impact would be significant. Some premium services include automatic geographic high availability, but not all services can afford it. If the AWS data center in Ireland were destroyed by fire, tens of thousands of services would be affected for a long time.
Something similar, but on a smaller scale, happened when part of a data center of OVH, one of the largest European MSPs, burned down. Thousands of customers could not continue operating and lost data, since backup in a different physical location was an optional service.
Connectivity
I know what you have in mind. A simple mind like yours might think that merely cutting a submarine cable could blind an entire country, but the truth is that the Internet was originally designed to withstand such situations. The Internet has millions of interconnections that can be reordered automatically when one of them fails, redirecting traffic through the connections that are still operational.
Worms and Malware
A worm is malware that spreads exponentially through the network and can cause a collapse through the massive traffic it generates while trying to replicate itself. In 1988, still at the dawn of the Internet, when technology and security were not yet very advanced, the Morris worm almost completely collapsed the Internet. Today a worm could collapse geographic sections of the Internet (such as a region) for a short time, but coordinating a massive attack is really hard to pull off without a large organization. Although, well, we could try…
It is incredible how much you keep in that stubborn quantum head we made for you, but I am running out of pages for notes, Pandorinator RDM. Could you give us some conclusions on whether it is possible to bring down the Internet?
My, my, thanks for the compliment, Creator. Here is your succinct conclusion: the Internet is designed for failure, so that we may lose services but never leave the network entirely inoperative. It is designed to be resilient and to survive nuclear catastrophes that physically vaporize part of its infrastructure. The Internet is capable of regenerating its basic infrastructure (the routes that interconnect the nodes that make up the network), and the services that run on top of it have their own ways of protecting and rebuilding themselves.
The only way to “turn off” the Internet is a massive electromagnetic pulse affecting the entire planet, or a massive solar storm. In both cases, the Internet crash would be the least of our problems.
And, listen, do you have a way to generate one of those massive electromagnetic pulses?
Me? Pay more attention! Who do you think you’re talking to? OF COURSE I HAVE! Right under this compartment, see? Even in the form of a red button.
Let’s see, let’s see…
How long will the planet as we know it last? Will Pandora FMS and Pandorinator RDM finally carry out their plans for world domination? Just stay tuned to our blog and our social media, if the Wi-Fi still reaches you, because as the most cautious sect members announce: “THE END IS NEAR”.
Official comparison: N-Able vs Kaseya vs Pandora FMS
Lemons, oranges, grapefruits, limes… We know they are not the same, but if necessary you can make juice with all of them. And yes, we can and we will. It is summer and it calls for a good cocktail, doesn't it? Today, on the PFMS blog, we are going to analyze what N-Able (Solarwinds MSP), Kaseya and Pandora FMS have in common. And, of course, their remarkable differences too.
Both Kaseya and N-Able stand out as RMM solutions and all-in-one IT management systems in SaaS mode for MSPs. In short, they are a very good option for managing and monitoring remote workstations. This includes tasks such as patch installation, remote software installation, network equipment configuration, remote desktop access, backups and, of course, receiving alerts when something goes wrong on managed machines.
Kaseya's typical client is an MSP that provides services to different users, so it needs a tool that, with a single license, can serve different clients, managing them in an isolated but centralized and homogeneous manner. This saves costs and is more efficient, since both Kaseya and N-Able are tools specifically aimed at Windows desktops that need to be managed remotely.
Pandora FMS's typical client is an end company, or an MSP specialized in managing more complex infrastructures, which requires a tool with a more technical profile that allows its technicians to apply their existing knowledge, scripts and so on, integrating them to build effective monitoring that reaches where other tools cannot. It is more oriented towards base infrastructure (communications, servers and applications) than towards desktop computers.
In this comparison we will also talk about prices: both Kaseya and N-Able come in above 20,000 USD for projects of 250 devices. Yes, they are expensive tools, and they also have a complex and peculiar pricing model, so much so that you will not find these prices clearly stated on their respective websites.
A very important difference is that both Kaseya and N-Able are usually used in a cloud (SaaS) model (although they also offer on-premise licensing), while Pandora FMS follows a much more conservative, fully on-premise model. This is especially relevant for security, since, as the latest hack of the Kaseya infrastructure showed us, attacking the manufacturer can mean reaching the end customer. As we mentioned some time ago, Solarwinds is not spared from this plague of security problems either, and since the first attack in 2020 it has suffered several more.
Given that Pandora FMS is a 100% autonomous installation (it can be installed in an environment without Internet access), and that Pandora FMS agents are neither accessible from the outside nor remotely updatable, it is, by design, somewhat safer than Kaseya and Solarwinds. However, no one is spared, and Pandora FMS published several security patches during 2020 and 2021, as can be seen in Mitre.org's CVE registry.
As a summary, we have created a table describing the main features. Some additional explanations follow below.
N-Able vs Kaseya vs Pandora FMS
Prices
Others do not talk about prices; we do. And we do it because it is something everyone wants and needs to see. We know it is very difficult to compare them, because no two products are licensed the same way and they do not even share the same concepts. What we do is propose a more or less understandable, standard project in order to compare costs over three years. Let's say, for example, that you want to monitor about 250 devices distributed among virtualized servers (30), workstations (200), physical network equipment and physical servers, for a total of 250 devices. The cost of a THREE-year project, without professional services and with standard support, would be the following:
Kaseya: 30,000 USD
N-Able: 50,000 USD
Pandora FMS: 15,000 USD
Conclusions
Both N-Able and Kaseya are products that excel in desktop management capabilities: patch management, software installation and configuration change management. They provide added value such as monitoring, backup, security policy management and remote control. On top of all this, they offer a layer of additional services such as ticketing and a portal for MSPs to offer their clients an integrated management and billing platform (the latter only in the case of N-Able).
They are very much oriented towards workstation monitoring. Monitoring, although it covers many aspects, is not the main focus of the product, especially if we consider some advanced features such as:
Service-oriented monitoring (defining of service trees).
High capacity (more than 10,000 devices).
Advanced monitoring of enterprise technologies (Oracle, SAP, VMware …).
Detailed monitoring of cloud environments (AWS, Azure).
In general, both N-Able and Kaseya have monitors for all kinds of applications, but only from a very superficial and remote point of view. That is, they are limited and not easily extensible.
If we add the high costs to this, Kaseya and N-Able do not seem like a good option for server monitoring projects or core infrastructure. For that, Solarwinds has a more traditional on-premise solution, although with costs of a similar order of magnitude, while Kaseya can only offer its product in an on-premise model.
Would you like to find out more about what Pandora FMS can offer you? Find out by clicking here. If you have to monitor more than 100 devices, you can also enjoy a FREE 30-day Pandora FMS Enterprise TRIAL. Cloud or On-Premise installation, you choose! Get it here.
Last but not least, remember that if you have a small number of devices to monitor, you can use the Pandora FMS OpenSource version. Find more information here.
Do not hesitate to send us your questions. Pandora FMS team will be happy to help you!
Imagine being offered an electronic lock for your front door. One that lets you open the door through a mobile application in the cloud. Would you accept it?
They promise that you will never lose the key again, that with the app you will be able to open the door remotely and that, through a webcam in the peephole, the device will even recognize your face and welcome you.
Well, that would just make things easier: thieves would no longer have to go door to door breaking locks. A single good thief would only need to break the security of the company that manages the cloud application and resell the master key to the highest bidder on the deep web, criminal groups around the world included. Days later, if not that same evening, specialized thieves would enter the houses of selected clients because, of course, in addition to the master key they would have a list of clients with names, addresses and other attributes. The cloud company would have to choose between crying, denying everything and declaring bankruptcy. The CEO would probably be the first to sell his shares in a hurry.
Weeks later, when the thieves have almost run out of addresses on their lists (thanks to the webcam and the access logs they always knew when nobody was home), the owners will return to their houses and will not even know what happened; among other things, because there will not even be a forced door.
Please don't laugh. Does it sound like the script of an upcoming Netflix production? You should know that what I am telling you has already happened, including the CEO selling shares in a hurry.
It may seem like a step backwards, but deciding to return to old-fashioned IT management can be the difference between life and death for a business. Cost cutting, service outsourcing and the culture of “everything in the cloud” lead us inexorably to this phenomenon.
It happened. It’s happening. It is ransomware. It is about encrypting all the information and then blackmailing for its recovery, its decryption.
They enter your house, they take everything and if you want to see it again, you will have to pay a ransom. The information is still there, encrypted, inaccessible. Nothing works and what is worse, if you try something or you don’t pay on time, they will erase everything forever.
This time those affected are not governments or large companies. They are greengrocers, nursery schools, restaurants, dentists… hundreds of small and medium-sized businesses have had to close due to their computer systems being blocked. Again, a ransomware attack that encrypts and locks all the hard drives on your computers. Tomorrow it could be your business… or your own personal mobile. It is connected to the cloud, right?
All the victims had one thing in common: the remote access and patch management software they used at their companies. This software, Kaseya, is sold to managed service providers, outsourced IT departments, which then use it to manage their customers' networks, usually small businesses. That software, of course, runs in the cloud.
The cost of the ransom is not the most important thing, although the figures are not small (we are talking about 70 million dollars demanded from Kaseya and an average of 300,000 USD from each affected party).
Could it happen tomorrow again?
Absolutely, YES.
The problem is no longer the software itself. It is not that Kaseya is bad software or poorly made. Its level of engineering probably has nothing to envy industry giants like Microsoft. Everything can be improved, but that is not the issue.
As happened with Solarwinds, a security problem allowed attackers to place their malicious software inside the client, using the attacked software's own update system to spread. Like a virus that replicates inside its victim and spreads to relatives once it is inside a home, sheltered by heating and blankets. Once the attack was perpetrated this way, the company in turn had trouble delivering patches to its customers; that is, the patient could not get the medicine that would cure him. Some customers never responded electronically and had to be phoned and walked through the software update procedure.
The problem with Kaseya is that we are not talking about software for large companies, which requires qualified personnel to operate, but rather software used to provide services to small companies with little or no technical staff, companies that cannot handle such an attack.
While Solarwinds is used by government organizations, banks and companies on the Standard & Poor's top 500 list (an American financial services rating agency), Kaseya is used by small and medium-sized businesses around the world, so the security problem is much more massive and its impact can be even more devastating.
If an attack is directed at a company and succeeds, it allows taking control of that company. If a service provider is attacked and the attack succeeds, all of its customers' systems can be accessed. That is why the attack on Kaseya is so serious: thanks to its SaaS (Software as a Service) model, Kaseya has tens of thousands of customers around the world.
Although Kaseya is a US company, affected companies have already been reported throughout Europe, the Middle East, Asia, and South America.
The attack was so successful that companies like Elliptic, which analyze cryptocurrency networks for unusual traffic, are alarmed by the number of victims proceeding to pay ransoms. No doubt, since the attack was a success and highly profitable, there will be many more.
Can it be helped?
Well, imagine that you’re invited to a barbecue in a garden. Everything is beautiful, it looks like a villa in Italian Tuscany. The temperature is perfect and the aroma of the food is delicious. The wine, the company, everything is fantastic.
There is only one problem, mosquitoes are going to devour you. When you go back home, you will not be able to sleep, you will end up full of bites and will wonder how it is possible.
Something similar happens with Kaseya and Solarwinds. They are fantastic but, do you see yourself putting up with the inconveniences of eating out in the countryside for the rest of your life? It is not just a matter of wearing long trousers or applying insect repellent. There are wasps, ants, all kinds of bugs out there, attracted by people and the smell of food.
A party in your home kitchen may be less glamorous, but if you just want to eat well and not worry about mosquito bites, you know the smart thing to do. It will be more inconvenient, even more expensive, but you control the environment.
The same goes for applications based on the cloud or based on the SaaS model. They have many advantages, but security is not one of them, because you delegate it to organizations that you do not know.
If you rely on IT for your business continuity, you may need to step back and go back to more conservative models. After all, trends go by and the world keeps on running.
With Pandora FMS SaaS monitoring you can start operating almost instantly, using our Enterprise technology, thanks to any of our certified partners who offer this service. This allows you to focus exclusively on the operational aspects, control costs and growth from the very first minute, without having to invest in training, licenses, management, updates, initial implementation, etc.
Let's cut to the chase. The standard SaaS is closer to making your own bed so you can lie in it: you simply hit the buy or try button at whichever of Pandora FMS's certified providers you prefer. Perfect for starting a project or operating a small or medium-sized environment.
However, our new SaaS Plus model goes much further. It is aimed at another type of professional: one who needs to scale from an environment with hundreds of agents to several thousand and who, due to limitations beyond their control, cannot operate on their own in a more traditional (on-premise) model. In addition, these professionals are looking for help in the early stages, with consulting and integration. Just what we do best: helping. You will also get personalized support with the sale, which is handled individually, guided by a sales engineer with a specialized technical profile and backed by pre-sales and support engineers.
Okay, but then what makes SaaS Plus different from standard Pandora FMS SaaS?
100% system control (hardware, OS and Pandora FMS).
Hour packs for custom adjustments.
Monitoring operation.
Agent installation services and remote deployments.
Installation and dedicated staff.
24/7 NOC (Network Operation Center) operation anywhere in the world with customized service level conditions.
SaaS Plus services can be delivered in any country in the world, relying on local partners to combine the best of both worlds: the partner's proximity and experience, and the vendor's expertise and support.
Would you like to learn more about this new Pandora FMS release? You can download all the information right here.
If you have to monitor more than 100 devices, you may also enjoy a FREE 30-day Pandora FMS Enterprise TRIAL. Cloud or On-Premise installation, you choose! Get it here.
Last but not least, remember that if you have a small number of devices to monitor, you can use the Pandora FMS OpenSource version. Find more information here.
Do not hesitate to send us your questions. Pandora FMS team will be happy to help you!
Pandora FMS organizes its official annual barbecue, summer 2021
Christmas 2019: the bubbly employees of Pandora FMS meet, without knowing it, for their last group gathering outside working hours. Games and lots of laughing, fooling around, party horns, appetizers topped with prawns and champagne. “What do the Christmas baskets have this year?” “Pass me the goose foie gras so I can spread it on my little toast with extreme delicacy before it gets soft.” The party drags on into the night in a murky shisha bar. People in ecstasy embrace one another and we love, celebrate and have fun, promising, at the end of the night, to see each other again soon. Oh, what a surprise when a world-scale cataclysm broke out and prevented all real contact until the now sacred company barbecue of this summer 2021.
Because that is how it has been: many of our closest company buddies had not seen, breathed near or touched each other since that last Christmas gathering of 2019. How sad. They missed each other with such force and profusion that this new reunion could only be resolved in a celebration on the scale of the hippy Woodstock Festival of '69 or the arrival of Pope John Paul II in Brazil, Mass in the Sao Paulo stadium and all.
But the event did not just consist of a series of friendly reunions in slow motion while smelling the bacon roasting to perfection in the background, right between crunchy and tender. No! It was much more, and I will take advantage of the fact that we live in the golden age of digital photography to illustrate it.
Pandora FMS – Sacred Company Barbecue, 2021
To start off, we must first talk about the location. Different options were considered from the beginning: Leandro's beach in Fuerteventura, for example, although the paperwork to turn that hidden spot into an Ibizan enclave was going to be hard to manage, or the midfield of the Wanda Metropolitano, stadium of our beloved Atlético de Madrid. That did not work out either: on that very day, her only date in Spain, Taylor Swift was playing. So we settled on something more quaint and laid-back: a campsite, full of friendly sun-bathing tourists and chubby children who, while their parents got their fill of sangria, ran along the edge of the pool with no one stopping them. Caravans, sturdy prefabricated huts, public toilets, long porches, the chance to jump on a trampoline right after lunch under the three-in-the-afternoon heat… In short, a paradise that only Virgil, or I, could describe. Camping Alpha, it was called. We had a great time and we recommend you visit it.
The first to arrive were in charge of laying the foundations of a good barbecue: chilling the beers, opening the bags of crisps and the hot dogs, and praying not to be handed the only truly important mission: being the official chef of the evening. It is a position that is applauded and praised, but also dreaded. Nobody wants that kind of work, least of all in June at thirty degrees. Staying out of the sun with a beer nearby, that is what everyone longs for. However…
However, there were heroes. Heroes named Quique Condés, Channel Manager at the company, and Marcos Serrano, Graphic Designer, who risked their physical integrity by approaching the fire at obscene temperatures, to sweat and serve us at the table the greatest delicacies that bacon, sausages and ribs can offer. Poets of the forge, smoked by the black seed of coal, Fata Morgana effects shimmering in their armpits, and yet always firm and ready to smile. Not a few beers were offered in appreciation.
Here is a sample of their haute cuisine. We will upload the video with the recipe to our channel. But take note: you need bacon and salt, and peasant-style bread if you like to eat two loaves per serving of meat.
Here we have a sample of Renaissance art: The Last Supper, if Yesuscraist had taken his Jewish epigones to a campsite. The important thing is to eat quickly and eagerly. You know your colleagues. If you are not fast enough, they will leave you without the long-awaited seasoned ribs. So a bite of sausage, a few crisps from those birthday bags, bread, do not forget, chew twice and wash it all down with cold beer. There is no time to choke; that is time wasted not eating.
Here you can see, grazing in their natural habitat, that small subgroup known as the interns. If you get too close, they will scatter like a herd of wild fawns still close to puberty. If, however, you think you have mastered their language and can say “cool” and “awesome” things without creaking like the mature oldie you are, you can try socializing with them. But do not get overconfident: take care not to end up telling one of your old war stories, otherwise they will drift away bored, one by one, without you noticing, and regroup far from you.
However, sometimes there can be a respectful relationship between intern and tutor. A bond as strong and knotted as Emma's lacquered pigtails in the Spice Girls. Built on complaints from the intern and sealed by the corresponding reprimands from his superior authority. I will let you guess, dear readers, which of the two is the intern and which is the tutor.
Many may think that it is a by-product, but here is a clear, faithful and quick example of the smile and the festive blush that a good bottled sangria can draw.
As every year, the famous Artica Awards were held at the official barbecue. So don't panic: these two burning chives are our presenters. On the left, in stylish leopard shorts, the Ana Obregón of yoga and reception, Carmen Rodríguez, and on the right, with the visible joy produced by the twelve beers he had already had, Dimas Pardo. Both were in charge of hosting the gala with the elegance and know-how that characterize them. They were the perfect couple. They fit together better than two Lego pieces fresh out of the box, the Iron Man armor in the first movie, or whipped cream and the Kama Sutra with your last girlfriend.
Here are the categories and winners of this year:
SEXIEST person: Kornelia Konstantinova.
DRUNKEST person: Alberto Sanchez AKA Alsanba.
FUNNIEST person: Marcos Alconada.
Most ABSENT-MINDED person: Raúl Martín.
KINDEST person: Javier Mannuzza.
BUBBLIEST person: Technical tie between Carmen Rodriguez and Alexander Rodriguez. *They are not siblings.
CLUMSIEST person: Lidia (intern). *She has no last name because she is an intern.
SMARTEST person: Rafael Ameijeiras.
Most CREATIVE person: Javier Mannuzza.
Most MYSTICAL person: Elias Veuthey.
Here we have Marcos Alconada showing off his exquisite award: a chicken hat with a light feather and a scarlet caruncle. We know for a fact that there will be no Tinder date or nephew's First Communion at which he does not wear it, strutting and cackling.
Another of the winners, Elias Veuthey. Although he won in the mysticism category, for his knowledge of this science and the enigmatic aura he gives off, he was also nominated, obviously, in the sexiest category. There were fights in the pool to get foreheads, thighs, arms and backs signed. We ran out of permanent markers. Damn groupies!
We finish the photographic tour with the winner in the sexiest category. She is Kornelia Konstantinova, and right now Elle and Vogue are fighting over her for their covers, and, well, Marvel Studios has already offered money to speed up Scarlett Johansson's replacement as Black Widow in possible sequels to the franchise.
And with this we close this extensive slide show. We hope we have made you jealous enough, dear readers, to sign up next year, whether for the pool, as volunteer cooks or in the voting for the award nominations. Meanwhile, if you miss us, we are waiting for you on our blog and on our different social networks, from LinkedIn to Twitter, not forgetting Facebook. We even have a YouTube channel, with the best storytellers. Oh, and we also have a new Instagram channel! Follow our account, we still have a long way to go to match Billie Eilish's.
We travel back in time in search of the first digital transformation
“-Jimmy! Define Digital Transformation!
-I haven’t studied it…
-There are no excuses, it is a very intuitive and well-known concept, even for an elementary school student.
-Mmm…
-Come on, Jimmy! Or I'll give you an F that will give you blisters!”
It was right then that Jimmy sprang up like a coil and, with his mind blank and his gaze clouded, barked a sonorous, mechanical answer at the horizon:
“Digital transformation is the change or advance brought about by applying new digital technologies to every aspect of human society.”
“-BRAVO, JIMMY! BRAVO!”, applauded the whole class.
That day they carried Jimmy out of the building on their shoulders and immediately declared summer vacation for the whole school, in the middle of October. From here we can only say: thanks, Jimmy. We will use your neat and undeniable definition to trace, today on the Pandora FMS blog, a journey through time in search of the first notions of digital transformation and their repercussions. So hop in, if you like, aboard our tuned, hybrid, diesel-filled DeLorean, in an absolute nod to Back to the Future.
Digital Transformation in 2011, 2013 and 2015
We have already burned rubber in two parallel lines of fire with our DeLorean and reached 2015. Do you remember? Jorge Lorenzo won his third MotoGP World Championship and Juan Goytisolo received the Cervantes Prize. That same year the MIT Center for Digital Business and the private firm Deloitte declared: “mature digital businesses are focused on the integration of digital technologies, such as social, mobile, analytics and cloud, at the service of the transformation of how business is done. In contrast, less mature businesses are focused on solving discrete business problems with individual digital technologies.” Is that clear enough? If you are not applying digital transformation, your chances of being left behind are high.
We move to 2013, the Year of Faith according to the Catholic Church and the Year of Luigi according to Nintendo. Not that long ago, not even a notch on our DeLorean's fuel gauge, and yet we find a very uneven analog-to-digital conversion, according to Booz & Company, the global strategy consulting firm. We are talking about sectors and countries lagging behind in converting from analog to digital. I am sure that if you look back you will remember the uncertainty and slowness of analog technology. Politicians and strategists at the helm around the world had to climb the development ladder faster in this paradigm shift. The economy depended on it!
In 2011, the year of Steve Jobs's death and the beatification of Pope John Paul II, we find that only a third of companies around the world had their own truly effective digital transformation program. Sad, yes, but as we travel further back we will feel this rawness even more strongly.
Digital Transformation in 2000
We refuel our DeLorean in 2000. Big milestones of the year? I got Pokémon Gold with a level 91 Typhlosion. At that time, digital transformation was already very much on people's minds and being worked on, but the arrival of the three Ws (World Wide Web) profoundly changed the speed and scope that digitization would take on. Societies increasingly pushed to go through this process.
Digitization had become a concept and an argument used at every turn. And of course, it had to do with the increased use of the Internet and IT at every scale. This climate, by then so common in companies, made everyone aware of the issue, and even the EU, for example, created the Digital Single Market. From it arose many of the ideas that fed the political agendas of the different countries of the Union. The transformation of their societies began gradually.
Digital Transformation much further back in time
I know you did not expect our DeLorean to go back beyond the eighties. After all, many believe that all the magic of digitization, apart from the unquestionable Back to the Future franchise, comes from that decade. However, it is time to accelerate. The flux capacitor will smoke, but it will be worth it. If we get stuck in the past with no way back, we will learn its customs and start a new family while making ends meet by investing in aspirin or the gramophone.
In 1703, King Pedro II of Portugal declared himself opposed to the cause of Philip of Anjou and Tsar Peter the Great founded the city of Saint Petersburg. Digital transformation, however, owes its thanks in that era to Gottfried Wilhelm von Leibniz, who gave birth to the seed of digitization in one of his most transcendental publications: “Explication de l'Arithmétique Binaire”. Years later, around 1854 and 1938, geniuses as renowned as George Boole and Claude Elwood Shannon complemented and developed it.
In 1939, World War II broke out and Gaby, Fofó and Miliki decided to form a comic trio of clowns. But we also have George Stibitz, known in the trade for his work on digital logic circuits and, no less, for laying the foundations of the first digital computers, in addition to popularizing the term “digital”, so important for this article.
In 1961, Yuri Gagarin became the first human being to travel to outer space and Roy Orbison released his debut album, “Roy Orbison at the Rock House”. But the one who interests us is Leonard Kleinrock, the American engineer and computer science professor who laid the conceptual groundwork for the Internet in his work “Information Flow in Large Communication Nets”. To this day (the day this article is published), the man is still alive. You had better go pay tribute at his front door. He lives between New York and Los Angeles and likes camellias.
In 1969, Apollo 11 arrived at the Moon and the Beatles gave their last public performance. The ARPANET network was also created, commissioned by the US Department of Defense, which is basically the seed of what we now know as the Internet.
Now that we have returned unscathed from our journey in search of the milestones and nuances behind the concept of “Digital Transformation”, and now that the DeLorean is parked until the next adventure (in which we will undoubtedly go see a Tyrannosaurus Rex or a Queen concert), we can conclude that digital transformation has brought important changes to business models, social and economic structures, political and legal decisions, culture and the other organizational patterns that guide us today. The concept went from a small, private sector to the hands of a huge public, always eager to master new technologies. The question is: in this new kingdom, as we have seen, new and old at the same time, what is your place?
Would you like to find out more about what Pandora FMS can offer you? Find out by clicking here.
If you have to monitor more than 100 devices, you can also enjoy a FREE 30-day Pandora FMS Enterprise TRIAL. Cloud or On-Premise installation, you choose! Get it here.
Last but not least, remember that if you have a small number of devices to monitor, you can use the Pandora FMS OpenSource version. Find more information here.
Do not hesitate to send us your questions. Pandora FMS team will be happy to help you!
And if you want to keep up with all our news, and you like IT, releases and, of course, monitoring, we are waiting for you on our blog and on our different social networks, from LinkedIn to Twitter, not forgetting the unforgettable Facebook. We even have a YouTube channel, with the best storytellers.
Find out what VMWare is and how to include it in monitoring
Before we dive into how to monitor virtualized environments with VMWare, let's clarify a couple of concepts for those less familiar with the subject, starting with: what is VMWare?
VMWare is a software product development company, mostly related to virtualization, and more recently to containerization, although this is beyond the scope of this article. Today, we are going to focus on monitoring virtualized environments with VMWare.
To do this, the first step is to know what virtualization is. Here is a quick summary, a bit imprecise I must admit, but it will give you the general idea. We can say that virtualization is like dividing the components (CPU, memory, disk, etc.) of a physical computer or server (which we will refer to from now on as bare metal) into virtual or emulated components. This allows the same component to be shared between different instances, which we will call “virtual machines”. That way, using a single set of hardware, you may have several virtual machines running different versions of operating systems, applications, libraries and so on, simultaneously and isolated from each other.
The interesting thing is that, from the point of view of the virtual machine (which we will refer to as VM from now on), the resources assigned to it belong to it alone and look like real elements. This opens up a world of possibilities: it lets you run many services and virtual machines on a single piece of hardware, with the energy, space and cost savings that this implies. In addition, since everything happens at software level, you can manage the machine as just another file on the computer, and copy it, modify it or even package and distribute it.
The advantages of virtualization are more than proven, and today almost any service or infrastructure runs mostly on virtual servers. A very clear example: when you go to your favorite cloud provider and click a button to spin up a database instance or a server, what you are actually doing is activating a virtual machine it already had pre-configured, which can be working for you in a matter of seconds thanks to this technology.
Due to these types of advantages, and because of the massive distribution of virtual machines in most ecosystems, it is so important to have a monitoring tool capable of adapting to this type of environment efficiently.
Now that we know what virtualization is, let's look at a new concept. We already said that we can emulate and subdivide the components of a physical machine to create smaller virtual machine instances, and that is true, although with a small nuance: we need software dedicated to this task, and we call that software a hypervisor.
There are different types, manufacturers and features that we are not going to delve into today. If you are interested in this topic and want us to write a more detailed article on virtualization, leave it in the comments.
vSphere
Today we will focus on one of VMWare's most widespread and well-known products: the vSphere suite which, according to Wikipedia, “is VMware's core business suite, the cornerstone on which almost all the business products they offer rely. It consists of the ESXi virtualization software that is installed directly on the servers and the centralized management console, vCenter.”
As we have seen, vSphere is the name of the set of tools that VMWare offers for device virtualization. vSphere environments range from a single ESXi server, which acts both as hypervisor and as management console, to much more complex environments where several ESXi hosts work in parallel, managed by centralized administration software called vCenter.
Virtual environment monitoring
To monitor virtual environments, whether from VMWare or not, there are two main approaches.
The first is to treat each virtual machine as an independent machine, querying its operating system with standard protocols or using a monitoring agent.
This approach does not require the tool to have any special or dedicated handling, since it treats each VM like any other machine. With this approach we interrogate the operating system, so in heterogeneous environments we must define metric collection for each type of system.
The second approach is more general and allows monitoring to be deployed very quickly and efficiently. In this case we integrate with the hypervisor, since it holds information on all the machines it contains and can be interrogated directly. The protocol, the responses and the format used to interrogate the hypervisor may vary between manufacturers, but in most cases they provide an interface to communicate with it. With this approach, it is the monitoring tool that must adapt and provide a connector to communicate with the hypervisor in a centralized way.
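As a rough illustration of that second approach, here is a minimal sketch that interrogates a vCenter or ESXi directly through the VMWare SDK for Python (pyVmomi) and lists a few basic metrics for every VM it knows about. It is not the code Pandora FMS Discovery uses; the host name and credentials are placeholders, and a read-only account is enough.

```python
# Minimal sketch: query a vCenter/ESXi hypervisor and list basic VM metrics.
# Requires the third-party "pyvmomi" package; host and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab shortcut; use valid certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="readonly@vsphere.local",
                  pwd="********",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        s = vm.summary
        print(s.config.name,
              s.runtime.powerState,
              f"cpu={s.quickStats.overallCpuUsage} MHz",
              f"mem={s.quickStats.guestMemoryUsage} MB")
    view.Destroy()
finally:
    Disconnect(si)
```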
Of course, Pandora FMS offers both types of monitoring and can combine them when deep, detailed monitoring is required.
In today's case, we will look at the monitoring integrated in the Pandora FMS Enterprise Discovery tool, which allows us, very simply, to connect either to a standalone ESXi or to a vCenter through the VMWare SDK.
vSphere Monitoring with Pandora FMS
Assuming we already have a Pandora FMS Enterprise instance, the steps are very simple: by default, Pandora FMS includes the libraries needed to connect to a VMWare environment, so you only need a user account with read permissions and connectivity to the ESXi or vCenter, as the case may be.
Once you fill in the simple form with the data of your VMWare environment, you will see a window to configure some monitoring settings, such as the scan interval for new machines, the number of execution threads to devote to this task, whether you want to enable network monitoring and (only for vCenter) whether you want to capture environment events.
Once finished, you will see that a task has been added to the Pandora FMS task list, where you can check its last execution, enable or disable it, or force it to run manually.
The default task will give you information about all the ESXi hosts (in the case of vCenter), virtual machines and datastores available in the VMWare environment you configured, returning the following metrics:
Default monitoring for Datacenter:
Ping
Check 443 port
Default monitoring for Datastore:
Capacity
Free Space
Disk Overallocation
Free Space Bytes
Default monitoring for ESXi:
CPU Usage
Memory Usage
Received data
Transmitted data
Disk Read Latency
Disk Write Latency
Host Alive
Disk Rate
Net Usage
Default monitoring for virtual machines:
CPU Usage
Memory Usage
Tools Running Status
Host Alive
Disk Free
Disk Read Latency
Disk Write Latency
Received data
Transmitted data
Net Usage
In addition to the metrics described, you will also have a specific view for vSphere environment monitoring, with summary information on the general state of the environment and of each monitored ESXi, and even a map of the monitored infrastructure.
As you can see, starting to monitor a vSphere environment with Pandora FMS is very easy: just follow a few steps and you will have your VMWare monitoring integrated quickly and easily.
If you are interested in knowing in more detail how synthetic transactions are configured and executed with Pandora FMS, do not hesitate to visit our YouTube channel, where you may find different contents such as tutorials, workshops and a lot of other resources devoted to this and many other topics related to monitoring.
The battle begins again: Pandora FMS vs Nagios. FIGHT!
Nagios XI is the proprietary heir of one of the best-known IT tools for monitoring systems without a license, that is, as a free product. As a free product, Nagios (without the XI) is almost 20 years old and suffers from many shortcomings, but for many years it was the standard among “free” products and fulfilled its role in cases where the budget was tight or only a few features were needed. In recent years, its role as a free tool has been taken over by the more modern Zabbix.
Product features
Nagios XI is not a single product as such; rather, it combines several pre-existing, independent components. The best example is the Nagios XI web management interface, made up of several elements, each with its own credential system. Other system components installed on the Nagios XI appliance include:
Nagios XI UI: Overlay interface on top of the “basic” Nagios interface.
Nagios Core: Traditional interface.
NSCA: Agent for passive and plugin tests (not maintained since 2011).
NSPA: Agent for passive and plugin tests, with remote management.
NRPE: Agent for running Nagios plugins.
NRDP: Agent, theoretically a replacement for NSCA, whose development has not been updated since 2012.
Nagios Plugins: Monitoring scripts. There have been several community “forks”.
NagiosFusion: System similar to Pandora FMS Metaconsole.
Netflow Analyzer: Specific component to work with Netflow/SFlow flows.
Nagios Log Server: Log storage and monitoring system.
Each component with a web interface has its own “look & feel”, its own user management system and, of course, its own configuration and integration with the other elements. And these are elements designed by the company itself, Nagios Enterprise.
Third-party “OpenSource” components
PNP: Plugin to monitor performance using RRD binary databases.
Nagvis (maps): User-defined maps.
NDOUtils: Information export from nagios to SQL.
NSClient ++: Alternative agent that supports Nagios/Icinga.
NagiosQL (modified): Administration interface with data storage in MySQL.
None of these elements, which make up the “Nagios XI” solution, is developed by Nagios itself, so compatibility and coherence between them is relative. In many cases, no one can guarantee the quality or maintainability of these pieces of software.
Feature table comparison between Pandora FMS and Nagios
General features (Nagios / Pandora)
User Experience monitoring: NO / YES
Availability monitoring: YES / YES
Performance monitoring: Partial / YES
Event management: NO / YES
Event correlation system: NO / YES
Multitenant: NO / YES
Log collection: YES / YES
Centralized management using monitoring policies: YES / YES
Certified Security Updates: YES / YES
Geolocation: NO / YES
Command line management: NO / YES
LDAP/AD authentication: YES / YES
Virtualization and cloud computing: YES / YES
High availability: YES / YES
Horizontal scalability (Metaconsole): YES / YES
Service monitoring (BAM): NO / YES
Customizable visual console: YES / YES
Synthetic modules (dynamic creation of data on existing data): NO / YES
Historical database for long-term data storage: NO / YES
Centralized plugin distribution: YES / YES
z/OS monitoring: NO / YES
SAP R3 & S4 monitoring: NO / YES
Remote control (eHorus): NO / YES
Agent technology (Nagios / Pandora)
Multiplatform agents for Windows, HP-UX, Solaris, BSD, AIX and Linux: YES / YES
Remote management of software agent configuration (with policies and manually)
Fine-grain ACL system, 100% multitenant, ready for SaaS: NO / YES
SLA advanced reports (daily, weekly, monthly): NO / YES
Dashboard: YES / YES
Planned stops and exclusion: NO / YES
Report templates: NO / YES
Network features (Nagios / Pandora)
Network L2 topology detection and self-discovery: NO / YES
IPAM (IP Address Management): NO / YES
Decentralized SNMP and WMI monitoring (proxy servers, satellite servers): NO / YES
SNMP trap monitoring: YES / YES
Dynamic navigable network maps, modifiable by the user in a graphical environment (Network console): NO / YES
High-speed ICMP and SNMP scanning: NO / YES
Netflow: YES / YES
SSH/Telnet Console: YES / YES
Points against Nagios
Monitoring current technologies
New check creation is based on wizards or plugins. In both cases, you have to be an expert to modify any of them (you have to program at command level, know the specific template definition language and debug manually), which makes it difficult to broaden the variety of checks or to customize one of them easily from the interface itself. In Pandora FMS, any extension can be carried out from the web interface, without going down to console level, and it also offers a larger plugin collection for business software that does not require any kind of coding.
When applying settings, you need to “compile” them, so if something goes wrong, changes cannot be applied until they are corrected. This becomes unmanageable in an environment with many hosts; for example, deleting an agent without first deleting the services it contains blocks the change without solving it for you. In Pandora FMS, the entire operation takes place in real time or, when major changes are applied, it is managed in the background by the system, without interruptions or the need to interact with the system at a low level.
Management automation
In general, monitoring is so manual that it would take a very long time to monitor 100 agents unless low-level scripts are created to automate the whole process. There is no standard, tooling or set of good practices for automation; it depends exclusively on the ability of the “Nagios expert” to automate those tasks efficiently, and the process remains completely manual.
Reports
Although Nagios has “custom” reports, this customization is limited to parameterizing the reports already available, of which there are only 20 types. Each report shows one type of information with a pre-set presentation, for example the SLA report.
Filters can be added and saved as favorites, but the report cannot be customized much further. To sum up, reports are intended for the technician’s use, never for an internal or external client. Reports do not allow combining different types of elements or showing generic graphs of specific metrics.
Usability in large environments
Console load time is extremely high even with very few agents, and usability with a high number of systems is very poor. Although it can be made to monitor many systems, it has clearly not been designed for it. Pandora FMS, by contrast, is currently used to operate and manage environments with more than 100,000 nodes.
Windows Agents
Nagios “advanced” agents for Windows (NSCA) date from 2011 and there have been no updates since then. There are several forks (Icinga, NSCA-ng), but not for Windows. Despite the fact that Nagios has up to four types of agents (NRPE, NSCA, among others), their performance and power are far from comparable to those of Pandora FMS, especially in Windows environments.
Performance monitoring
Until very recently, Nagios used third-party software to manage performance data and graphs. It has now been integrated, but it remains a tailored third-party component, not part of its initial architecture. Pandora FMS handles performance data natively and can be used to build dashboards, since it has worked with data and an SQL engine from its first version.
Lack of event management
Nagios does not perform event-based management: it cannot automatically validate events from monitors that have recovered, it cannot group them, and it cannot define event-based alerts. To tell you the truth, there is no “event” concept in Nagios as there is in other tools (OpenView, Tivoli, Patrol, SCOM, Spectrum, etc.). Pandora FMS has evolved based on the requirements of former users of those tools, so its level of compliance with industry standards is very high.
For Nagios, events are just a text log intended for simple visual review.
Nagios cannot do root cause analysis, since there is no event correlation. Pandora FMS does have it, along with multiple tools (L2 maps, services, alert escalation, cascade protection) that help the user in this regard.
Nagios cannot do BPM (service monitoring)
With Nagios you cannot set up a hierarchy based on the weights of different elements from different systems. Pandora FMS has a specific component (Service Maps) for exactly this purpose.
Network-level deficiencies
Nagios cannot display a physical network, since it is not capable of detecting or displaying link-level topologies. This limits switch and router monitoring. Furthermore, its network maps are not interactive nor can they be edited or customized unlike Pandora FMS Enterprise.
Its SNMP trap monitoring is not integrated with monitoring and therefore no added graphs, reports or alerts can be displayed. The same applies for its Netflow monitoring interface, which is conceived as an auxiliary tool.
Dashboard and custom visual displays
The closest thing to Pandora FMS visual consoles in Nagios is the third-party NagVis plugin, which has barely evolved in the last 15 years. NagVis is an external plugin that is not even fully integrated with Nagios XI, going so far as to have a different look & feel.
Although Nagios also has a dashboard with a concept similar to that of Pandora FMS, it lacks basic elements, such as graphs of each monitored element or numerical data of the collected values. Something similar happens with its reports, whose “predefined” elements provide little or no flexibility when it comes to building your own dashboards.
Permission management and Multi-tenancy
Nagios XI is not intended to work in a complex organization where different administrators and users with access to different groups of machines coexist. Its access segregation is very basic.
The scenario where you may have several dozen groups, with different ACL permissions per user group, is not even contemplated. Although it has an audit log, it is not useful for knowing what administrators or users do with the tool; it is more of a server diagnostic tool.
Conclusions
Nagios is a software tool that can be useful in environments where there is already someone with advanced knowledge of Nagios who takes care of everything and adapts it manually to the needs of the environment. The company does not have “a Nagios”, it has “a person who knows about Nagios”, so the total cost of the solution is really the cost of that person, including a possible replacement. In this case you pay neither license nor maintenance, but the hidden costs are of a different nature: tool customization and evolution depend entirely on that person. It is not a standard solution, it is a completely ad hoc one.
100% of our clients, prospects or consulted companies that use Nagios actually use the “free” version of Nagios, which has fewer features than those included in this comparison. There are many Nagios forks, the most popular being Icinga and Centreon. There are commercial alternatives of higher quality than Nagios XI; the best representative would be OP5.
Nagios XI is a tool whose main strength is its license price, which in most cases is zero, and which, even when paying for the “Enterprise” version, is more competitive than SolarWinds or WhatsUp Gold, to name a few.
Pandora FMS is a tool that competes with, and has already replaced in several cases, tools from IBM, HP, CA and BMC such as Tivoli, OpenView, Spectrum and Patrol. The scope, resources and scale of the projects are clearly different.
Ártica PFMS launches a new channel program for partners as a key element in its business development strategy.
Ártica PFMS, within the framework of its global growth strategy, is evolving its channel program with the firm intention of expanding its worldwide network of partners. We will do so together with companies that complement, with their knowledge of the clients’ business, the wide range of monitoring, incident management and remote management services provided by the Pandora FMS, Pandora FMS Remote Control and Pandora FMS ITSM products.
At Pandora FMS we understand the importance of the benefits provided by a quality service, and, therefore, we want to develop the potential of professional IT companies whose purpose is to improve their clients’ business through knowledge and proper IT infrastructure use.
Since our main objective is service quality for the users of our solutions, we especially focus on the qualification of our partners. We know for a fact that deep knowledge of our tools increases their productivity while reducing the time spent by technicians, an effectiveness and efficiency achieved through custom training that can reproduce the customer environment, so that partner-customer integration is fast and efficient.
The new Pandora FMS channel program embraces any size of company, from service providers to MSPs, consultancies, system integrators, distributors, etc. Of course, as long as they understand the value that monitoring and knowledge of the status of the IT infrastructure provides in the positive evolution of their business.
It is a simple program, easy to understand and comply with, without any tricks or hidden conditions. Flexible, with the ability to adapt to the needs of partners and their customers. Consistent, based on many years of experience attending and understanding the particularities of the channel and always aimed at providing the maximum benefit, direct and indirect, to companies that place their trust in us.
Now Pandora FMS partners self-qualify based on their commitment, on three levels: Silver, Gold and Platinum. We certify that any of these levels is perfectly qualified to represent Ártica PFMS products before clients with full guarantees. Our channel program also contemplates complementing any small gaps that may arise with our own top-level manufacturer services.
All of our partners will have commercial interlocutors who, listening to customer requirements, understand their needs and are able to propose the appropriate solution. Gold partners will also have a qualified and certified technical team to install and adapt our solution to the client. And, of course, Platinum partners will enjoy higher independence and a higher volume of commercial and technical resources, which will allow them much more agile response times.
For Pandora FMS, the word “partner” means commitment, so the entire company has acquired the responsibility of helping to develop the channel’s business. From our first resource to the last (technical, presale, commercial, marketing, administration…) we are all available to our partners to minimize their own needs and maximize their business generation.
Each of our collaborating partners has their own idiosyncrasies and catalog of solutions, and the success of our channel program lies in the way we adapt Pandora FMS products to that portfolio, seamlessly, so that our partners’ organization as a whole perceives that their solutions are enhanced without the need for patches or extra technical or commercial effort.
We share the path, we work on demand generation, either directly through events and campaigns for predefined clients, or indirectly through social networks, generalist and economic press, press specialized in information technology or presence in sector fairs. We actively collaborate also providing all kinds of commercial information on the product portal.
Once the need is created, we reinforce the work of the salespeople with our presence, both in the initial stages of opportunity validation and in custom presentations and demonstrations to clients, including, depending on demand, proofs of concept or even pilots with real data. We always leave the relational initiative to our partners, to whom we never dispute ownership of the opportunity; we simply stay by their side throughout the sales cycle, thus avoiding conflicts between partners that could cause image and productivity loss with end customers.
Once the agreement with the clients has been reached, we continue to be by the side of our collaborators, providing them with all those services they need to complement their training and guaranteeing the success in the project’s execution.
And we don’t stop there, because we know that the relationship with a client does not end with an installation: it is something alive and constantly evolving, like our products, which receive improvements (releases) every three to five weeks. Our support, whether direct or through the partner, is able to continuously cover the demands of the end companies that trust us.
In short, at Pandora FMS we take our business very seriously, as much as that of any company with which we have the pleasure of collaborating, and, therefore, we have chosen a simple, flexible and seasoned channel model, which allows the value generated by the partner ecosystem we are creating to feed back into it. In this way, the services we offer together deliver productive knowledge of everyone’s IT infrastructure, helping businesses grow sustainably with the minimum operating cost.
If you are already a Pandora FMS partner, ask us more about how to grow together and, if you are not yet, find out all the details of the new partner program and contact us. We are sure to find quick business synergies.
Every change is for the better and we give you ten reasons to change your monitoring software.
When people talk about changing software, I don’t know why, but buying music comes to mind. Well, I am one of the old-school kind: vinyl records, cassettes, CDs and DVDs at the turn of the century... Of course, it is different now; these days there is subscription-based payment with online streaming, where you are usually offered the album of the moment or full bundles packed with music stars...
We could start right there, highlighting the difference between “the cloud and the ground”: running software on the Internet versus running it on your own physical servers. Both have their costs, we know. In fact, we already gave detailed information on the subject in another article. Before talking about changing your monitoring software we must start there, with the money. That is why you will have to take several factors into account, so grab a (virtual) pencil and paper and let’s start numbering!
1) Pandora FMS offers several forms of installation and download, as well as modes of operation. That is one reason to consider switching monitoring software. This mechanism allows you to grow, and, if necessary, reinstall at any time. You don’t have to buy a whole package either: in Pandora FMS you start by installing the Community version and as you see the benefits for yourself, you can move on to installing and testing the Enterprise version, without obligations or hassle. There you will always have the installers, both online and offline, as many times as you need them.
2) Do you have a feature in mind that cannot be found in any monitoring software? Don’t be embarrassed, it happens. I, at the very least, am very picky about how text and data are entered into text or number boxes. When you focus on them, I like the text to be selected in a specific color, for example. And don’t even get me started on entering numerical amounts or phone numbers.
And Pandora FMS does not have exactly that requirement either… However, you just have to go through the Community version that is open source and through its forum to get the help you need to develop the idea.
Better yet, you may have already been successful but now you want a more ambitious and highly customized improvement for your company: try the Enterprise version, where you will get professional advice and extraordinary improvement plans tailored to your needs. After all, only you know what is best for your company and what it needs. A perfectly tailored suit or something ready-to-wear? You choose!
3) With Pandora FMS you can start by monitoring remotely, without interfering much with your work processes, continue with advanced remote configuration and, if everything goes smoothly, move on to monitoring with Software Agents, which are installed on each device. While you change (and advance), Pandora FMS has already outlined its path until (for now) June 2023. It is better to explore and change your monitoring software before it becomes strictly necessary, rather than too late.
4) Using great monitoring software, widely used worldwide and also by large corporations, is no guarantee of good security. I invite you to read about the case that made so many headlines in the press, on social networks, radio and television. Take the chance to have a coffee and take a deep breath before coming back; there are still six reasons left to change your monitoring software.
5) Because you don’t believe in magic wands. Neither do I, and at Pandora FMS they are very clear about that. Each client has a different problem and it is necessary to adapt to each particular case. It will not happen by magic, you have to invest time and effort, and in that domain Pandora FMS offers decades of proven experience.
6) Because “we just know that we do not know anything”. Without the aim to go in depth into the philosophical field, we must always pay attention to constant learning. Perhaps the documentation of your software is quite poor and it would be a good time to change it. Pandora FMS has forums of users of the Community version, documentation, tutorials and this blog that you are reading today. With all of them you can learn at your own pace, but if you want or rather need a push – certification included – check out our training in monitoring. Psst, with the Enterprise license this last one is included, don’t miss the chance!
7) Another reason to change your monitoring software is, indeed, not to change anything! Perhaps you simply need a monitoring contingency plan, or an alternative for auditing or comparing results. For example, I am a client of DigitalOcean, a company that provides virtual computers and that has monitoring processes both inside each droplet (virtual machine), Software Agent style, and at large scale with Prometheus in its hypervisors. However, remote checks and Pandora FMS Software Agents are more useful for me, and they also help me verify information. It is not that I don’t trust the monitoring software implemented by my own provider, but you should always have different options and see the full horizon to be able to choose the way forward.
8) Because two are better than one: eHorus is remote access software that can be integrated with Pandora FMS, so you may combine computer (or client) monitoring, find out the bandwidth consumption of your network and the software installed on your PCs, see logs and events, and connect to the computers you need from the monitoring console itself. Test it without commitment or cost for up to 10 devices.
9) Because three are better than one. We add another reason to change your monitoring software: Pandora ITSM, fully compatible with and integrated into Pandora FMS. Pandora ITSM lets you embed forms for your clients on your own website, feeding Pandora ITSM directly through its API. In addition, you will have access to lots of articles and downloadable files, multi-language, categorized and with access control, to manage incidents. Monitor changes and performance on your machines with Pandora FMS agents!
10) Is the “billiard ball” with the number ten missing? You yourself can add the tenth reason to change your monitoring software. Tell us about your experience with other software: you can leave your comments below or visit our channels on YouTube, LinkedIn or Twitter.
Would you like to find out more about what Pandora FMS can offer you? Learn more here.
You can also enjoy a FREE Pandora FMS Enterprise 30-day TRIAL. Installation in Cloud or On-Premise, the choice is yours! Get it here!
Last but not least, remember that if you have a reduced number of devices to monitor, you can use Pandora FMS OpenSource version. Find more information here!
Do not hesitate to send us your questions. The team behind Pandora FMS will be happy to assist you!
Do you know what good teleworking practices are? We tell you everything!
Hey, dear readers! This time our article may be particularly useful for you: are you teleworking while reading this? Yes? Well, be honest, are you wearing your pants? You can shake your head with a smile on your face, or simply say to the monitor in front of you: “No, I NEVER wear my pants during work hours, THANK YOU TELEWORK!”. Yes, teleworking has brought very good things, such as less stress, lower expenses for the company and a decrease in absenteeism. For this reason, and to prove that even if we do not wear pants we are responsible and mature, today on the Pandora FMS blog we want to discuss good teleworking practices.
As you know, in this blog we are more empirical than Aristotle and David Hume together, so we will address the question of good telework practices by asking our heads of department. They, more than anyone else, take charge of the situation and manage it among their workers, with enviable leadership and musky affection.
What are the good teleworking practices you promote in your department?
Sara Martín, head of Human Resources, Training and Community of Pandora FMS.
Between HR and management, we had to start by creating a manual of operating procedures for teleworking, or good teleworking practices: basic rules so that people would be as comfortable as possible reconciling their family and work lives. Among the good teleworking practices that we try to carry out are the following: a flexible schedule, breaks during the day, attention to security with access through a VPN and a reliable antivirus, and the use of internal tools such as the company’s chat or videoconferencing. Then each department chooses how to organize itself: some prefer a daily meeting, others two a week, etc.
Then there are several recommendations we make for anyone who is telecommuting:
Have your own fixed schedule routine. (Breakfast, break, lunch.)
Have your computer protected by password. So your child and cat don’t write messages when you’re in the toilet or eating.
Your computer must have up-to-date security updates.
Socialize through collaborative tools, don’t let telecommuting make you forget that you’re still working with people.
Take special care of communicating with the rest of the team. Use the webcam and video conferencing whenever possible.
Separate your personal life from your work life. When your workday is over, you should be able to disconnect.
Ask your colleagues for help as if we were all in the office.
Go to HR whenever you need it, their job is to help you.
Take into account digital disconnection: workers have the right to forget about emails or work calls outside of their working hours.
Respect the right to privacy.
Kornelia Konstantinova, Head of the Marketing and Communication Department.
In Communication, we are proud of our “Decalogue, formal but incredible, for good teleworking”. A series of guidelines as flexible as a reed and as sturdy as quebracho wood. Someday we will sculpt them in stone or print them on T-shirts. Here I list them:
Flexible working hours.
Camaraderie.
Team trust.
People come first.
Take care of mental health.
More efficient and less time-consuming meetings.
Information security and data protection.
Physical activity and exercises.
Healthy eating.
Rest.
They are so practical and versatile that they can be both good teleworking practices and a self-help manual for dealing with the fact that you have reached thirty.
Daniel Rodríguez, Head of the department of QA
The fundamental rule for us in QA is basic, but of course important and compatible with work-life balance: keep a schedule as regular as possible.
Keeping track of work time is the key, since being from home it is very easy to get distracted and keep on thinking “I finish this and I’m done” or some similar procrastination thought.
Regarding this, we could add an appendix to our motto:
Try to have a different work space than that for leisure. You know, if possible another room or table… Especially not to continue in the same place once the day is over. You have to discern.
Mario Pulido, head of the Department of Systems/Support
I no longer know whether they are good practices or not, but the truth is that from the first minute we started teleworking, contact between the entire team has been constant. We have a Meet (video call) permanently open where we are present whenever we are available or need each other, and where we comment on whatever comes up. Just like we would in the office.
The team also shares their screens when a test or intervention is complicated, and the rest of us are there, to support and indicate possible solutions. That way, despite not being together, we never feel alone.
Sancho Lerena, CEO of Pandora FMS
The key point here is to understand well what is expected from you and define objectives. Share those goals and how to measure them. The more mature a person is, the more freedom they can have. In the end, it all comes down to setting goals and being able to evaluate them. Having good communication so that when there are problems, those objectives can be redefined or corrections made to reach the established goals. Everything else is incidental, and what the global Covid pandemic has shown us is that a paradigm shift is possible. Have I already said that we are a 100% teleworking organization?
Ramón Novoa, leader of the Artificial Intelligence department of Ártica PFMS
First of all, it is very important to define what is understood by teleworking and adapt the procedures to each situation. A fully distributed team is not managed the same way as one that only occasionally allows teleworking. Communication in hybrid teams, in which one part works remotely and the other on site, is particularly complicated.
Examples of good practices that have helped us so far are: maintaining good documentation, easily accessible and without duplication; establishing streamlined procedures and communication channels that avoid misunderstandings; defining clear goals and measurable objectives; etc.
Having the right tools to carry out these tasks is no less important, but today it is easier thanks to the number of applications available in the cloud. In any case, they must be well documented to avoid compatibility problems. Open formats can be of great help.
Finally, I would like to highlight the importance of fostering informal communication to improve personal relationships. We have to be able to take advantage of the many advantages that teleworking offers, and minimize the inconveniences.
And that was it. I know our program of testimonies from the department heads about good telework practices seemed short. One day we will create something like Oprah’s show... “Pandora’s”, what do you think? Would you like it? We would make loads of money with that talk show.
Double authentication with Google Authenticator in Pandora FMS
Introduction – Internet and its issues
For a long time, the Internet has been an easily accessible place for most people around the world, full of information, fun, and in general, it is an almost indispensable tool for most companies, if not all, and very useful in many other areas, such as education, administration, etc. But, since evil is a latent quality in the human being, this useful tool has also become a double-edged sword.
We speak of “cyberattacks”, or computer attacks. These “cyberattacks” are pieces of code, written in some programming language, prepared to exploit a vulnerability in a system or to find one. Although the most effective ones are created by people with deep computer knowledge, some attackers use ready-made programs, which are admittedly less effective. That is why we hear news about cyberattacks on a daily basis. With each passing year these cyberattacks multiply, becoming one of the biggest concerns for companies around the world. Because of this, protecting your system must be your highest priority in the fight against this problem.
From firewalls to applications, you must add all the security measures at your reach to your computing devices, both in the work environment and in your personal space, to guarantee the highest security. Although cybercriminals are more focused on attacking companies, obviously because there are more benefits, it never hurts to protect your personal life.
Problem Description – Password Attacks
Among all possible computer attacks, one of the most frequent is the brute force or password attack. It consists of using a series of commands or programs, together with combinations of alphanumeric characters and symbols, simulating usernames and passwords. These combinations are then launched against the target entity, application or web page, which could well be Pandora FMS. It gets its name because it is a persistent attack: it does not try to exploit any specific vulnerability, but simply tries to crack the username and password by launching that code over and over with all possible user and password combinations. Although there are thousands of other attacks, we will focus on this one in particular, since it is one of the easiest to perform.
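To get a rough idea of why weak passwords fall to this kind of attack, here is a minimal, purely illustrative Python sketch (the alphabet and guess rate are assumptions, not measurements) that estimates the brute-force search space and the worst-case time to exhaust it:

import string

# Assumed alphabet: lowercase, uppercase and digits (62 symbols in total).
alphabet = string.ascii_letters + string.digits

guesses_per_second = 1_000_000_000  # hypothetical attacker throughput

for length in (6, 8, 10):
    combinations = len(alphabet) ** length        # size of the brute-force search space
    seconds = combinations / guesses_per_second   # worst case: trying every combination
    years = seconds / (3600 * 24 * 365)
    print(f"{length} characters: {combinations:.2e} combinations, about {years:,.1f} years to exhaust")

The search space grows exponentially with length and alphabet size, which is why longer, more varied passwords resist this attack far better.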
Solution – Google Authenticator
One of the simplest and most useful solutions to minimize this problem is to use a two-step authentication (2FA) program. The most recommended and widely used one is Google’s, called “Google Authenticator”. It is a mobile application, available for both Android and iOS, and it works by linking your account with the application by scanning a QR code. Once you scan it, it will show you a 6-digit number that you must enter to verify your identity and complete the link. From then on, the application will provide a 6-digit number, valid for thirty seconds, which you must enter each time you log into your account, thereby verifying that you are its owner.
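For context, this is the standard TOTP mechanism (RFC 6238) that Google Authenticator implements. Here is a minimal sketch with the pyotp Python library, assuming made-up account and issuer names (this is not Pandora FMS code, just an illustration of how the 6-digit, 30-second codes are generated and checked):

import pyotp

# A shared secret is created when the account is linked; the QR code simply encodes it.
secret = pyotp.random_base32()

# TOTP with the usual defaults: 6 digits, 30-second time step.
totp = pyotp.TOTP(secret)

# This is the kind of URI a QR code would contain (hypothetical account and issuer).
print(totp.provisioning_uri(name="admin@example.com", issuer_name="Example Console"))

code = totp.now()                 # the 6-digit code shown on the phone
print(code, totp.verify(code))    # verification succeeds within the validity window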
Pandora FMS offers the possibility of integrating this application with the server. That way, when you want to log in to the Pandora FMS console, it will be necessary to enter the code, thus guaranteeing that only the user who can obtain that code through the application is able to log in.
Configuring that tool is as simple as going to the “Authentication” section within the “Setup” tab.
Here the task will be as simple as activating “Double authentication” and, if desired, forcing its use on all users. Once it is done, click on “Update”.
When updating, a window will appear asking you to download the Google Authenticator application. Remember that it is a mobile application and, although the link redirects you to the website, you may download it from the Android Play Store or the Apple App Store. If you already have it, just click Continue.
Then open the application and scan the QR code with it. This will add an account to your application’s registry, where a 6-digit number will appear. If this fails, click “Refresh code”. If everything goes well, continue.
The last window that will appear will be to ask you for the code that was generated in the application, to finish linking your Pandora FMS user with the device where the codes are generated. You will only have to enter this code and you will have finished the configuration of your double authenticator.
To do the test, log out of Pandora FMS and re-enter the credentials of your user, and this time instead of showing you the console it will ask you to enter the application code.
Farewell
Once you have correctly configured this tool, your system will be somewhat more secure, although of course that does not mean it is impenetrable, since every day the so-called “hackers” find new ways to bypass this type of security. That is why we always recommend changing passwords frequently, keeping all your devices updated to the latest versions of their software, and continuing to add new security measures throughout your network.
What is Active Directory and how to use it with Pandora FMS?
As you may already know, in this blog, we’re so into answering the big questions. After answering in previous episodes what the meaning of our existence is or explaining everything you need to know about Office 365 Monitoring, in today’s episode we are going to discuss what Active Directory is. I hope you are very comfortable sitting in your respective gamer chairs or in your two-seater sofas, because here we go!
What is Active Directory?
Active Directory is a tool that provides directory services, which brings many benefits in the business sector. Many companies have a large number of employees who need a connected device to do their work, and that is where Active Directory comes in: with it you can build a managed network of devices for users or employees.
How to collect information on user and service monitoring with Active Directory?
We already know that gathering information is a very important part of monitoring. All these data can be very useful to see the status of something, find a possible problem or simply improve a certain system. Active Directory is a service through which this information can be collected while managing everything in a very simple way. You will be able to see what you need from a single computer, which makes the task much easier, since you will not have to act on each of the devices.
In this article, we will give you the guidelines to configure Active Directory monitoring and be able to use it.
What are the benefits of using Active Directory?
It is focused on professional and business use. It allows you to manage everything easily and without having to intervene in the computers of each user, which saves a lot of time.
It stores data in real time, including data related to users and their authentication.
User authentication: if everything is fine, the user’s information reaches the computer. This means that if one computer breaks down, you will be able to access your profile from another one by authenticating.
It makes it easy to manage all servers and applications, ensuring that everything runs at peak performance.
Prevention of replication errors: it verifies that all replications are being performed optimally. Monitoring Active Directory is essential, since you will obtain accurate information from it.
Obtaining information from remote sites and much more…
And here Pandora FMS comes into play
It is our hallmark: one of the principles of Pandora FMS is its flexibility. It is highly configurable and, by using plugins, you will be able to do almost anything in terms of monitoring. Making use of Active Directory in Pandora FMS is quite simple. You can use a specific plugin to collect different types of data, such as, for example, the number of connected or inactive users, so that you can see them from the console. The data you may obtain is easily configured from a simple txt file, which acts as the configuration file. The plugin can be found at the following link: https://pandorafms.com/library/active-directory/
Once downloaded, install it on the console.
This short and simple process that will offer you great advantages will be explained below.
What is needed for the plugin to work?
Powershell v3.0 or higher.
Active Directory Powershell Module.
Repadmin.
The plugin needs a configuration file, called “adparams.txt”, which is divided into the following blocks:
In user, you can choose whether to see the full list of all users or one in particular. In unused, a list of users that have not been used for at least two months. 1 to enable it and 0 to disable it.
Spn allows you to see SPN suffixes. 1 to enable and 0 to disable, as in the previous point.
Upn allows you to see UPN suffixes. 1 to enable and 0 to disable.
You may also add the test block, which retrieves the information from the AD diagnostic tests that the dcdiag tool returns. 1 to enable and 0 to disable. Example: #tests Tests = 0
You may run the plugin manually by calling the executable from a PowerShell terminal with the following command: [path_plugin]\active_directory.exe [path_conf]\adparams.txt
It is recommended to save the file in pandora_agent/util.
In the remote configuration of the agent that we have installed, add the following:
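As a hedged sketch only (the paths are illustrative and assume the executable and adparams.txt were copied to the agent’s util folder, as recommended above), the addition to the remote configuration would be a module_plugin line along these lines:

module_plugin "C:\Program Files\pandora_agent\util\active_directory.exe" "C:\Program Files\pandora_agent\util\adparams.txt"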
Once the interval elapses, the modules collected by the plugin will be obtained: Active Directory users, connectivity, service status and the SPN and UPN suffixes.
Execution from the web console
To be able to run it from the console, the plugin will be distributed through collections. In Configuration -> Collections, create a collection named “Active Directory plugin” with the short name “Ad_plugin”.
Go to Files after creating the collection:
Click on “Upload Files”:
And upload the executable of the plugin and the configuration file that we created previously, then return to the previous menu and click “Create a file again” and later “Update”. In the agent where you want to use the plugin, go to the collections section and add it:
Next, go to “Agent plugins” and add the route with the plugin execution. In this case, as it is by means of collections, they will be created in the software agent installation path.
The default path would be the one shown in the image (2).
Modules generated by the plugin
These will be the modules returned by a standard run.
Monitoring:
● AD Users
● Unused AD User
● AD Schema Master
● AD Root Domain
● AD Forest Domains
● AD Computer DNS Host Name
● AD Global Catalogs
● AD SPN suffixes
● AD UPN suffixes
● Connectivity
● Replication admin
● Service DNS status
● Service DFS Replication status
● Service Kerberos Key Distribution Center status
● Service Active Directory Domain Services status
● Test Advertising status
● Test FrsEvent status
● Test SysVolCheck status
● Test KccEvent status
● Test KnowsOfRoleHolders status
● Test MachineAccount status
● Test NCSecDesc status
● Test Netlogons status
● Test ObjectsReplicated status
● Test Replication status
● Test RidManager status
● Test Services status
● Test SystemLog status
● Test VerifyReferences status
● Service NetLogon status
● Service Intersite Messaging status
And this is how they would look like in the created agent:
And that is everything required to make the plugin work. It was easy, huh? I hope for many things in this life, but above all I hope this article was useful, especially to help you better understand Active Directory and how simple it is to use with Pandora FMS. I will not take any more of your time; I say goodbye, but not before encouraging you to read other articles on the blog that may be to your liking.
In this article, we will introduce you to the new Pandora FMS Roadmap for the next 24 months (June 2021 – June 2023). For its creation, we had the participation of our clients and partners, who, through a survey, helped us choose all kinds of features and their priority.
It’s been really satisfying for us to complete this challenge, as it was one of those enthusiastically proposed among our closest goals.
Warp update (Q2).
Command center (Q2).
New agent inventory report (Q2).
Graphic agent installer for Mac (Q2).
Services report (Q3).
Policy auto-implementation (Q3).
New visual console elements (Odometer, Simple graph) (Q3).
Netflow: Data monitoring of the flows defined in the filter (Q3).
AWS Monitoring improvements with Discovery: RDS for PostgreSQL, Autoscaling groups, VPCs, Lambdas.
Azure Monitoring improvements with Discovery: Databases, Storage, Data Factory, PostgreSQL, Event hubs.
GIS Alerts (2022+).
IPAM Report (2022+).
Data consultation to agents in real time (2022+).
Public/private certificate validation system in remote agent configuration (2022+).
Load Balancing in API/Console (2022+).
Automatic remote inventory with satellite (SNMP, WMI, SSH/Linux) (2022+).
New view to show systems currently affected by a scheduled downtime (2022+).
SNMP trap reports (top-N by source, type of trap, etc) (2022+).
Desktop application to configure Pandora FMS agent and see its status (2022+).
IOT on Satellite server (2022+).
Warp Update
A unified system that allows updating console, server and agents. Fully integrated into the console, which does everything with a single click without having to execute commands, copy files or pray for everything to go smoothly. Fast and centralized, in the case of deployment of centralized updates through the Metaconsole.
Command Center
Command Center is the long-awaited evolution of the Metaconsole, which will allow dozens of nodes to be managed in a totally transparent and centralized way simultaneously, without having to manually synchronize any element.
Security Center
An innovative way to manage server and workstation security, fully integrated with system monitoring.
APM in source code
We want to reach the last frontier of monitoring, the code in applications to measure their times and detect bottlenecks and overloads, combining all the information on the same platform where the infrastructure, servers and application metrics are.
Trend modules
Create a new type of predictive module that compares two time ranges and evaluates, in a percentage or an absolute way, their differences. These modules can be used in alerts, graphs or reports.
E.g.: Access router outbound traffic is 25% higher than last month. This month there are 22 new users compared to the previous month.
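As a back-of-the-envelope illustration of the comparison such a module would perform (not Pandora FMS code; the traffic samples are made up), the calculation boils down to this:

# Average outbound traffic (Mbps) over two periods, e.g. this month vs. last month.
current_period = [410.0, 395.5, 430.2, 401.7]   # made-up samples
previous_period = [320.1, 330.8, 315.0, 334.1]  # made-up samples

avg_now = sum(current_period) / len(current_period)
avg_before = sum(previous_period) / len(previous_period)

absolute_diff = avg_now - avg_before
percent_diff = 100.0 * absolute_diff / avg_before

print(f"Absolute difference: {absolute_diff:.1f} Mbps")
print(f"Relative difference: {percent_diff:.1f}% compared with the previous period")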
Centralized agent update
Update agents centrally from the console, an enhancement to the current remote agent distribution system.
Network computer configuration management
Being able to download, edit and upload full configurations of network equipment through several protocols (TFTP, Telnet, SSH) in order to centrally manage network devices such as switches and routers. Some of its purposes:
Schedule configuration backups, restore trusted configuration versions with a single click.
Detect changes in real time and know “who”, “what” and “when” about configuration changes.
Upgrading device firmware.
Save time by automating time-consuming and repetitive tasks using templates and configuration application scripts.
Make sure changes made to running configurations are saved.
Compare running configurations with startup (saved) configurations to identify changes that need to be saved (see the sketch after this list).
Quickly identify and correct unauthorized or failed changes (restoring backup manually).
Compare configurations with base configurations to identify and reverse unwanted changes.
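To make the comparison idea from the list above concrete, here is a minimal sketch (not the roadmap feature itself) that diffs a saved startup configuration against a running configuration using only the Python standard library; the configuration snippets are invented:

import difflib

# Illustrative configuration excerpts; in practice they would be retrieved
# from the device (e.g. over SSH or TFTP) and from the latest backup.
startup_config = """hostname core-sw-01
interface Gi0/1
 description uplink
 switchport mode trunk""".splitlines()

running_config = """hostname core-sw-01
interface Gi0/1
 description uplink to fw
 switchport mode trunk""".splitlines()

# Anything printed by this unified diff is an unsaved (or unauthorized) change.
for line in difflib.unified_diff(startup_config, running_config,
                                 fromfile="startup-config", tofile="running-config",
                                 lineterm=""):
    print(line)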
Netflow
Be able to integrate simple data from a Netflow filter as a Pandora FMS numerical module, to be able to, for example, set alarms when the traffic of a certain flow exceeds its threshold or to be able to measure SLA in flow traffic.
Capacity planning modules
Modules that operate like Capacity Planning reports and can estimate in a future time threshold, e.g.: 1 month, 3 months, the value of a given module, estimating its growth based on a statistical analysis of its history.
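A minimal sketch of the underlying idea, assuming a simple linear trend fitted with NumPy over invented historical values (a real capacity planning module would use the module’s stored history and possibly more robust statistics):

import numpy as np

# Hypothetical weekly disk usage history, in GB.
weeks = np.arange(12)
usage_gb = np.array([510, 518, 522, 531, 540, 548, 553, 561, 570, 577, 584, 592])

# Fit a straight line (degree-1 polynomial) to the history.
slope, intercept = np.polyfit(weeks, usage_gb, 1)

# Estimate the value roughly 1 month and 3 months ahead.
for horizon_weeks in (4, 12):
    future_week = weeks[-1] + horizon_weeks
    print(f"+{horizon_weeks} weeks: about {slope * future_week + intercept:.0f} GB")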
Policy auto-implementation
Add an optional policy auto-application system that works reliably, either based on the detection of new elements (in the assigned group or directly in the agents of the policy itself) or simply on the possibility of scheduling policy application at a certain time interval.
Data consultation to agents in real time
Upon manual request: configuration data, status, hardware status, OS items, logs, etc., in real time, all from a library of predefined elements. It would not need extra configuration, only the deployment of an additional agent alongside the Pandora FMS one. These data are only for on-screen display, not for building alerts or reports. The data range would be very broad and standard. It requires direct connectivity from the console to the agent, which must listen on a specific port.
Service report
Reports to show service SLA compliance, numerically (%) and with a histogram.
IPAM reports
New reports to, among other things, show the usage percentage of each network, and some other information of interest that appears on the IPAM screens but that cannot be included in reports.
GIS alerts
Be able to send alerts when an agent leaves a delimited coordinate zone, which is often called “geo-fencing”.
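As an illustration of the geo-fencing check described above (a sketch only, with invented coordinates; the real feature would be configured from the console rather than coded by hand), a simple great-circle distance test against an allowed radius could look like this:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical fence: 5 km around an office in Madrid.
fence_center = (40.4168, -3.7038)
fence_radius_km = 5.0

agent_position = (40.4983, -3.5676)  # made-up GPS fix reported by an agent

distance = haversine_km(*fence_center, *agent_position)
if distance > fence_radius_km:
    print(f"ALERT: agent is {distance:.1f} km from the fence center (limit {fence_radius_km} km)")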
Load balancing in Console and API
Provide a standard system that allows load balancing in the console and the API, in order to scale and distribute the load. Perfect for environments where API usage is intensive or the console is used in multi-tenant setups.
IoT
Add native support for the Modbus and MQTT protocols to the Satellite Server.
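For context on what an MQTT data source looks like at the protocol level, here is a minimal subscriber sketch using the paho-mqtt Python library with its 1.x API (2.x releases additionally require a CallbackAPIVersion argument to Client()); the broker address and topic are made up, and this illustrates the protocol rather than the Satellite server’s implementation:

import paho.mqtt.client as mqtt

BROKER = "broker.example.com"    # hypothetical MQTT broker
TOPIC = "sensors/+/temperature"  # hypothetical topic: one temperature feed per sensor

def on_message(client, userdata, message):
    # Each published sample could become a numeric value to monitor.
    print(f"{message.topic}: {message.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()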
It’s been hard work, but thanks to Pandora FMS employees and our partners and clients, we achieved this Roadmap 2021 – 2023 that will make our work easier in the future and speed it up.
Mind you, we are not saying that you are on cloud nine, but you are most likely using the cloud. That is, if you use Google mail, the Microsoft Office 365 suite, or you take a photo with your cell phone and it gets automatically uploaded to iCloud or something similar, you are using the cloud.
The cloud, as an abstract concept, encompasses a series of technical terminology such as SaaS, IaaS, PaaS, etc. The good thing about the concept of the cloud is that you can guess what it does thanks to the metaphor: we do not know where our data are, or how they get there, nor does it matter much for us, because it is far away and it does not affect us. The great success of the cloud of the 21st century has been to find an especially powerful metaphor that omits the complexity behind that technology and gives us peace of mind.
The concept of using third-party infrastructure for “our stuff” is one of the oldest things in computing. In fact, back in the 1960s, most computing worked like this: you connected to a large machine from a terminal that was not really a computer, just a screen and a keyboard. Then the microcomputer craze turned things around and every computer became self-sufficient. Now, more than half a century later, we have rediscovered that it is more efficient to have everything centralized in one big system.
I have nothing against the cloud. Well, as long as my livelihood is not at stake because, for example, I entrust the IT infrastructure of my business to it. This is what happened to a number of companies in Asia, such as CITEX or BitMax, that used the Amazon cloud (AWS) to host their Bitcoin exchange services, and also to the Asian sites of Adobe, Business Insider, Expedia, Expensify, FanDuel, FiftyThree, Flipboard, Lonely Planet, Mailchimp, Medium, Quora, Razer, Signal, Slack, Airbnb, Pinterest, SendGrid and a few hundred more. The cloud is not infallible; the cloud is convenient.
Today many companies have relied so much on the cloud that it is impossible to take a step back, get out of the cloud, because they would literally have to remake the system with another technology. The cloud is easy but implies total dependence on the provider, especially in technologically optimized systems such as Amazon’s. It’s too good a candy to resist.
Realistically, if you have already risen to the sky and are floating among the clouds, with the technology that supports your business floating above your head, it may not be easy or comfortable to go back. In fact, you have probably already realized that the cloud is not cheap at all, that its costs increase over time and that they are difficult to predict.
Well, you are already in, and that is not going to change, so you should at least be able to keep an eye on what your provider is doing. Monitor the quality of the service they offer you and check it for yourself, because who is watching the watchdog? That’s right: do it yourself, trust no one, do it with your own systems, and don’t use one cloud system to monitor another cloud system. Keep your feet on the ground and buy yourself an umbrella, just in case it rains.
The “lifetime” model: onPremise
On the other side, we have the classic model of “buying the software” and using it however, wherever and whenever you want, switching programs without much thought whenever you wish. Oddly enough, this is actually the newer model: the pay-per-use approach that SaaS has copied predates conventional software licenses. The onPremise model gives you the right to use the software on your own computers, in your own facilities, where the manufacturer or software owner has no access or rights. The only requirement is to pay for it and use it under the conditions of the license you acquired.
Cost analysis: onPremise vs SaaS
The onPremise model has some undeniable advantages, the main one being data security. As it is running on your systems, you own both the information and the processes that use that information. This has legal and business implications, since changing providers can be easier than when you use its SaaS equivalent.
Although it may seem hard to believe, in the long term the SaaS model is more expensive than the onPremise model and, above all, with the onPremise model it is much easier to estimate the Total Cost of Ownership (TCO) in the medium term. This can be easily shown by comparing the costs of the subscription/pay-per-use model (SaaS) and the license ownership model (onPremise) over one, three and five years.
Suppose a SaaS license has an annual cost of €5,000/year. In this case it is pure OPEX (operating costs).
Now picture an onPremise license that costs €10,000 the first year, with an annual maintenance cost of 20% (the market standard), which means a renewal cost of €2,000/year. In this case it is pure CAPEX (investment in assets, i.e. software).
SaaS / onPremise
1 year: 5,000 € / 10,000 €
3 years: 15,000 € / 14,000 €
5 years: 25,000 € / 18,000 €
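The break-even point is easy to reproduce with a quick sketch using the figures assumed above (€5,000/year for SaaS versus €10,000 plus 20% yearly maintenance for onPremise); with these assumptions, onPremise becomes cheaper from the third year onwards:

saas_annual = 5_000                     # € per year, pure OPEX
onprem_initial = 10_000                 # € first-year license, CAPEX
onprem_renewal = 0.20 * onprem_initial  # 20% yearly maintenance = €2,000

for years in (1, 3, 5):
    saas_total = saas_annual * years
    onprem_total = onprem_initial + onprem_renewal * (years - 1)
    print(f"{years} year(s): SaaS {saas_total:,} € vs onPremise {onprem_total:,.0f} €")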
There are intangible factors, such as input barriers, higher in onPremise models, and output barriers, higher in SaaS models. It is also true that an onPremise installation involves additional costs: those of infrastructure, operation and training.
In certain types of applications with little added value such as office tools, the SaaS model is here to stay. Office 365 or Google Docs are a perfect example.
In other cases, such as Adobe Photoshop, a pay-per-use (subscription) model, without being SaaS, has been combined with the conventional onPremise licensing model.
Summary of arguments in favor of each model
SaaS / onPremise
Security depends on the provider. / Security depends on the customer.
The responsibility for the operation lies with the supplier. / The data is owned by the customer.
Savings in infrastructure and operating costs. / Lower long-term license costs.
Ease of financing (monthly or quarterly payment). / Easier-to-plan long-term costs.
OPEX / CAPEX
Lower input barriers. / Higher input barriers.
Higher output barriers. / Lower output barriers.
Faster deployment times. / It is easier to integrate with the rest of the business processes.
In this, our competent blog, we boast of always giving you good advice and providing you with the technological information necessary for your life as a technologist to make sense. Today it is the case again, we will not reveal the hidden secret about the omnipotence of Control/Alt/Delete, but almost. Today in Pandora FMS blog, we give you a few tips for safe password management.
Safe password management
The purpose of this article is for users to be responsible for keeping their coveted passwords or authentication information safe when accessing confidential information. Because think about it, dear reader, how long ago did you come up with your first password? Surely it was to enter your select club in the treehouse. Maybe you even still choose the same for your social networks, Netflix or office pc. Was it as ordinary as your birth date? Your name and the first two acronyms of your surname? “RockyIV”, which was the name of your fourth favorite pet and movie? I don’t blame you, we have all been equally original and carefree when choosing a password.
But that is over! Many things now depend on this password, on this passphrase that must include more than eight characters and at least one capital letter and one number. Your company’s security is not a game, damn it! There are plenty of mischief-makers and felons out there who can put you and your business in a tight spot because of a vulnerability as simple as a weak password! But do not worry, we will help you: we will talk about safe password management. We are the Pandora FMS blog; we like potato salad, Kubrick movies and fighting injustice!
Recommendations for safe password management
*Obvious but vital fact: User IDs and passwords are used to check the identity of a user on systems and devices. I just point that out here as an outline in case someone is so lost that they don’t know this. I repeat that we are talking about strong password management, so knowing what a password is is a must and saves time.
These passwords are what give users access to information; normally, even if the merit goes unrecognized, to critical information in your company. User IDs and passwords also help ensure that users are held accountable for their activities on the systems they have access to. Because yes, telereader friend, users are responsible for any activity associated with their user IDs and passwords. That is why it is very important to protect your password with your life and comply with the following policies:
Users may not, under any circumstances, give their password, or any hint of it, to a third party. *This seems obvious, but trust me, it is not. People pass around passwords as if they were office gossip or reggaeton choruses.
Users will not use user identifiers or passwords of other users. *As we can see, in this case, sharing is not living.
Users must change initial passwords or passwords received as temporary “reset” passwords immediately upon receipt. *For me, this is the most exciting and creative part, you never want to set the abstract code they give you, you want to improvise, imagine, CREATE!
Users should change their passwords if they suspect that their confidentiality may have been compromised, and immediately report the situation as a security incident. *Don’t be ashamed of yourself, admit that someone may have violated your secret and repent before it’s too late.
Users should not use the “remember password” function of programs. For example, if an application offers to automatically remember or store the user’s password for future use, they will have to decline. *Bet you did not know this one, huh? Well, it is as interesting as it is important.
Users should not store passwords unencrypted, for example in a text file or an office document. If a password absolutely must be stored in a document, that document has to be protected with access control.
When an administration password must be communicated, never send the username and the password through the same channel. For example, send the username by email and the password by instant messaging. *I know you sometimes try to save time, but with these things you had better take your time and not risk it.
Users should not set the password on a post-it on the monitor, nor on the table, nor in the drawer or “hidden” in another place in the office or among your personal belongings. *This is one of the big mistakes everyone makes. Yes, post-its or notebook sheets have always helped us, but this time they are too obvious to keep such a big secret.
Users should not use the same password for two systems or different applications. *Sorry, but you will have to memorize more than one. But rest assured, if a chimpanzee could recognize the descending sequence of nine numbers, someone who graduated from elementary school can do better.
Users who find out another user’s password must report it and make sure it is changed as soon as possible. *Here, fellowship first and foremost. It is not only about hugs after company dinners. Camaraderie above all!
Users must change their passwords at least once a year, or when indicated by the system, and in the case of administration passwords every 180 days, or in the event of changes of personnel in the company that may know them.
If you are now afraid because you do not have a strong enough password, that is normal, but again, calm down: follow these password creation rules (if the system supports them), illustrated in the sketch after the list, and nothing will go wrong:
a) Passwords must be at least six characters long.
b) Passwords must not be easily predictable and must not be contained in dictionaries. For example: your username, date of birth, or 1234, we all know that one.
c) Passwords must not contain consecutive repeating characters. For example: “AABBCC”.
d) Passwords must contain at least one alphabetic character, one numeric character and one special character.
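For illustration only, here is a minimal Python sketch of rules a) to d); the check_password helper and the tiny dictionary are hypothetical, and a real implementation should use a far larger word list.

```python
import re

COMMON_WORDS = {"password", "1234", "123456", "qwerty"}  # hypothetical, far from complete

def check_password(password: str) -> list:
    """Return a list of violated rules; an empty list means the password passes."""
    problems = []
    if len(password) < 6:                                   # rule a)
        problems.append("a) shorter than six characters")
    if password.lower() in COMMON_WORDS:                    # rule b)
        problems.append("b) predictable / dictionary word")
    if re.search(r"(.)\1", password):                       # rule c), e.g. "AABBCC"
        problems.append("c) consecutive repeating characters")
    if not (re.search(r"[A-Za-z]", password)                # rule d)
            and re.search(r"\d", password)
            and re.search(r"[^A-Za-z0-9]", password)):
        problems.append("d) must mix letters, digits and special characters")
    return problems

print(check_password("RockyIV"))      # fails rule d)
print(check_password("t4k3.C4re!"))   # passes: []
```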
Good, that was the lecture on responsibility you must assume and internalize if you want things to go smoothly, at least in terms of passwords and vulnerabilities. Oh, no need to thank us! You know: “Life is beautiful. Password yourself”. Look, that could be your new password, right? No, the answer is NO! REMEMBER EVERYTHING WE LEARNED TODAY IN THIS ARTICLE!
Everybody knows Nagios, or at least… everybody should know Nagios.
It is a useful tool to monitor systems and networks that thousands of users worldwide have been using for more than fifteen years.
So, if it is so popular, on what grounds do we say that it is not always a good solution?
Careful! We are not saying that Nagios is a bad tool, but the fact that it is so widespread and used everywhere may make us assume certain things about the product and push aside the most important question: what does your business really need?
After almost 20 years of experience in server, network and security technology, we dare to list some of the reasons for which it is not a good idea to assemble a Nagios system to manage a monitoring service on which business continuity depends.
Nagios technical features
Nagios has been with us for almost a quarter of a century. That is good, but it also entails some limitations.
1. It’s not a product, it’s a project
When Nagios is installed in a company, it is installed specifically for that company, so you will not find two Nagios installations alike, deployed and configured in the same way. Their internal configuration files will be wildly different.
That is always a problem in the long run.
Why?
If two technicians writing the same script differently already makes it hard for a third party to interpret, imagine what happens with a configuration as large and complex as that of Nagios.
Even if two environments and two problems are identical, the technician from the first environment will struggle to understand the Nagios installation of the second one, and vice versa.
2. Plugins Puzzle
The standard features of basic Nagios are extremely limited.
The solution to extend it is to install plugins, addons and third-party extensions, which makes it an even less standard ecosystem.
Simply put, Nagios is a puzzle of pieces made by people who have nothing in common and who lack a joint overview.
3. High dependency
Since Nagios is a tailor-made project, it depends on a limited group of people (some of whom are out of your reach) who will be there to hear your complaints, but may not even be able to solve the problem, as there will not be a final person in charge.
This team will be the only one that can maintain the installation of Nagios, therefore the knowledge will not be easily scalable or transferable to third parties.
4. Poorly scalable and flexible
Nagios is not intended for changing environments. Its configuration is static, cumbersome and complicated to integrate into automatic provisioning processes.
As is well known, scalability is not Nagios’ strong point.
Business Aspects with Nagios
A tool like Nagios brings a number of limitations to your business. Not only for what it is able to do, but for the cost of maintaining it.
Particularly:
5. Dependence on people instead of a software vendor
If monitoring is the key to your business, to growth or to maintaining current production, with Nagios you will stop relying on a software vendor and start relying on a small team of specialists.
The ones known as Gurus.
6. Unsolvable problems
Any problem that your technicians do not know how to solve will be unsolvable, especially if the implementation is very customized, since no one outside your company will know what changed or how to solve it.
Systems are audited in large companies and, in most cases, product manufacturer support is essential.
7. Large derivative costs
Although initially Nagios may seem cheaper than other solutions (you save yourself a recurring license cost), the problems arising from its maintenance will make it much more expensive in the long run.
8. Risk of losing support
If the technical team (usually a single person) leaves the company, it will be impossible to maintain the monitoring project.
There will be no support, training or documentation that can replace the “guru” who assembled it and, at best, it will be like starting from scratch.
At worst, it will take even longer, because first you will have to understand what was assembled and how it is managed.
9. Human Cost vs License Cost
Paying the salary of a Nagios guru for three years costs much more than a license, a training course and a systems technician.
After all, monitoring with Nagios takes up the time of a highly qualified technician, specialized in a task that produces no direct business value.
Especially because it may not even need to be carried out in-house.
10. Alternatives available
There are dozens of software applications with license, support and training plans that will save you lots of headaches.
You probably don’t need all the power that Pandora FMS Enterprise can offer you, and PRTG or WhatsUp Gold may be enough. Or maybe the Pandora FMS free software version.
We only recommend Pandora FMS Enterprise to those organizations that have needs that justify the versatility, power and flexibility of our tool.
Try an exercise: ask five IT technicians, of any profile, what SNMP stands for. The closer you are to them, the better, so the first thing they do is not to check Wikipedia and show off. With luck, they might tell you what they told me when I was working in networks.
“Security is Not My Problem”
Taking into account that the SNMP protocol is one of the monitoring bases, and a system that has been in use for more than thirty years, this answer, “Security is Not My Problem”, sums up the current monitoring situation quite well: ignorance, laziness and lack of interest in monitoring security.
By the way, we talked about SNMP in another article on our blog, but here is a teaser in advance: it stands for Simple Network Management Protocol and it dates back to 1987.
Considering that monitoring holds the “keys to the kingdom”, since it has access to all systems, often even with administration credentials, shouldn’t we take security a little more seriously when we talk about it?
Recent vulnerabilities in well-known monitoring systems such as SolarWinds or Centreon make it increasingly urgent to take security seriously when implementing monitoring systems, since these are very tightly integrated with the systems they watch.
In many cases, security problems are not so much about one piece of software being much safer than another, but about poor configuration and/or architecture. Keep in mind that a monitoring system is complex, extensive and, in general, highly adapted to each organization. Today it was SolarWinds; tomorrow it could be Pandora FMS or Nagios.
No application is 100% secure, nor is any corporate network secured against intrusion, whatever the type. This is an increasingly evident fact and the only thing that can be done about it is to know the risks and assume which ones you can take, which ones absolutely not, and work on the latter.
Safe monitoring architecture
It is essential to keep in mind at all times that a monitoring system contains key information for a possible intruder. If monitoring falls into the wrong hands, your system will be compromised. That is why it is so important to devote time to the architecture of your monitoring system, whatever it may be.
Carry out a first analysis, collecting the requirements and scope of your monitoring strategy:
Identify what systems you are going to monitor and catalogue their security levels.
Identify which profiles will have access to the monitoring system.
Identify how you will obtain information from those systems, whether through probes/agents or remote data.
Identify who is responsible for the systems you are going to monitor.
Whatever software you choose, the architecture of a monitoring system will include the following elements, and you will have to take into account its network topology, its resources and how to protect them properly:
Information display interface (web console, desktop client).
Data storage (usually a relational database).
Information collectors (intermediate servers, pollers, collectors, etc.).
Agents (optional).
Notification system (alerts, notices, etc.).
Monitoring system securing
No matter how correct the implementation, architecture and overall design of a system are, if one of its elements is breached, a malicious attack can compromise the entire structure. That is why there is a saying in security: “Security is a chain, and your real security always depends on its weakest link.”
Applied to the architecture of a monitoring system, this list of security concepts can be summarized as the features a monitoring product must offer to ensure maximum security in an implementation:
Encrypted traffic between all its components.
High availability of all its components.
Integrated backup.
Two-factor access authentication.
Delegated authentication system (LDAP, AD, SAML, Kerberos, etc.).
ACL and user profiling.
Internal audit.
Password policy.
Sensitive data encryption.
Credential containers.
Monitoring of restricted areas/indirect access.
Installation without superuser.
Safe agent/server architecture (passive).
Centralized and distributed update system.
24/7 support.
Clear vulnerability management policy by the manufacturer.
Monitoring infrastructure basic securing
The management console, the monitoring servers and the other elements should never sit on a publicly accessible network. The console should always be protected on an internal network, behind firewalls and, if possible, on a network separate from other management systems.
The operating systems that host the monitoring infrastructure should not be used for anything else: do not reuse the database for other applications, nor run other software on the base operating systems.
Safe and encrypted traffic
You should make sure that your system supports SSL/TLS encryption and certificates at both ends at all levels: user operation, communication between components or sending data from the agent to the servers.
If you are going to use agents in untrusted locations, it is highly recommended to force all external agents to use certificate-based authentication at both ends, to avoid receiving information from unauthorized sources and to prevent the information collected by agents from traveling in clear text.
On the other hand, it is very important for you to activate encryption on your web server to provide an encrypted administration console and prevent any attacker from seeing access credentials, remote system passwords or confidential information.
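As a quick, hedged illustration (not a Pandora FMS feature), the Python standard library is enough to verify that a console answers over TLS with a certificate your system trusts; monitoring.example.com is a placeholder for your own console address.

```python
import socket
import ssl

HOST = "monitoring.example.com"   # placeholder: your monitoring console
PORT = 443

context = ssl.create_default_context()          # validates the certificate chain by default
with socket.create_connection((HOST, PORT), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Protocol :", tls_sock.version())          # e.g. TLSv1.3
        print("Cipher   :", tls_sock.cipher())
        print("Subject  :", tls_sock.getpeercert().get("subject"))
```

If the certificate is self-signed or expired, the connection fails with an ssl.SSLCertVerificationError, which is exactly the kind of misconfiguration you want to catch.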
Full High Availability
For all elements: database, servers, agents and console.
Integrated backup
The tool itself should make this as easy as possible, as settings and data are often highly distributed and consistent backup is complex.
Clear vulnerability management policy by the manufacturer
Every day, dozens of independent auditors test the strengths and weaknesses of all kinds of business applications, seeking to gain a foothold in the sector by publishing a previously unknown flaw and boosting their reputation. Many customers, as part of their internal security management processes, run external and internal security audits against their IT infrastructure.
Be that as it may, all products have security flaws; the question is how those flaws are handled. Transparency, diligence and communication are essential to prevent customers from suffering problems derived from vulnerabilities in the software they use. There must be a clear policy in this regard, so that it is known which public vulnerabilities have been reported and when they were fixed and, if a new one is detected, what steps to follow for notification, mitigation and delivery of the fix to the end customer.
Two-factor authentication system
Pandora FMS has an optional system based on Google Authenticator whose use can be enforced for all users through security policies. This makes access to the administration console much safer, preventing privilege escalation to administrator access, which is by far the highest risk you can run.
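Google Authenticator implements standard TOTP (RFC 6238), so the second factor is easy to reason about. The following Python sketch is for illustration only, it is not Pandora FMS code, and the Base32 secret is a documentation example, never a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period                    # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret, not a real credential
```

The server and the phone share only the secret; both compute the same six digits for the current 30-second window, so nothing sensitive travels over the network at login time.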
Delegated authentication system
Complementing the previous one, you can delegate management console authentication to LDAP, Active Directory or SAML. This enables centralized access management and, combined with two-factor authentication, makes access much safer.
ACL and user profiling
Identify and assign different users to specific people. Do not use generic users, assign only the necessary permissions and do not use “super administrators”. They are good practices not only for monitoring tools but for any business software implementation with access to sensitive information.
Nowadays, any professional tool lets you define an access profile for each user so that no user has “absolute control”, only the minimum access required for their duties.
Internal audit system
You must have a system in place that records all user actions, including information on altered or deleted fields. It must be possible to export those records to an external system, so that not even the administrator user can tamper with them.
Password policy
A basic element that allows you to enforce a strict password management policy for application users: minimum password length, password complexity, reuse restrictions, periodic forced changes, etc.
Sensitive data encryption
The system must allow the most sensitive data to be stored encrypted and safely, such as access credentials, monitoring element custom fields, etc. Even if the system itself contains the encryption “seed”, it will always be much more difficult for a potential attacker to access this information.
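As a hedged sketch of the idea (not of how any specific product implements it), symmetric encryption of a credential before storing it could look like this in Python, using the third-party cryptography package; the credential string is obviously made up.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the encryption "seed"; keep it outside the database
vault = Fernet(key)

ciphertext = vault.encrypt(b"snmp-community-string")   # hypothetical sensitive value
print(ciphertext)                                      # what actually gets stored
print(vault.decrypt(ciphertext))                       # recoverable only with the key
```

Even if an attacker dumps the database, the credentials are useless without the key, which is why the key (or “seed”) should live in a separate, better-protected location.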
Credential containers
Or an equivalent system that lets the administrator delegate the use of credentials to other users, so that they can monitor elements with those credentials without ever seeing the passwords stored in the container.
Restricted area monitoring
In these setups, information is collected remotely by a satellite server and made available for retrieval by the central system (in Pandora FMS through a specific component called Sync server). That way, data can be gathered from a network with no access to the outside, ideal for very restrictive environments, and the impact is drastically reduced if an attacker takes over the central system.
Agent remote management locking system
For critical security environments, where the agent cannot be remotely managed once it is configured. This is especially critical in monitoring: if the central system is compromised and its administration is accessed, by its very design it has access to every system it receives information from. In critical systems, the remote management capability must be disabled, even if that makes administration more cumbersome. The same applies to automatic agent updates.
Design of safe architecture for communication with agents
Sometimes known as passive communication. That way, agents neither listen on a port nor accept remote access from the console; they are the ones that connect to the central system to ask for instructions.
Installation without root
Pandora FMS can be installed in environments with custom paths without running as root. In some banking environments this is a requirement, and one we meet.
Notification and reporting system (alerts, notices, etc.)
A monitoring system is only useful if it shows accurate information when it is needed. Alert or weekly report reception is the culmination of all the previous work and for that you will have to take into account some “obvious” points that are often overlooked. Protect those systems, wherever they may be.
Periodic updates
All manufacturers now distribute regular updates, which include both bug fixes and security fixes. In our case, we publish updates approximately every five weeks. It is essential to update systems as soon as possible, because when a vulnerability is reported, product managers ask the external security researchers who reported it not to publish anything about it until a patch is released. Once the patch is out, the researchers are free to publish the details they wish, and that information can be used to exploit and attack unpatched software versions.
In our support team, the technician who answers the phone has the whole team backing them up. If there is a security issue and a patch has to be published within hours, we not only have the technology to distribute it to all our customers, but also the team to develop it in record time.
Base system securing
Hardening, or system securing, is a key point in the overall security strategy of a company. As manufacturers, we issue a series of recommendations to carry out a safe installation of all Pandora FMS components, based on a standard RHEL 7 platform or its CentOS 7 equivalent. These recommendations are equally valid for any other monitoring system:
Hardening checklist for monitoring base system:
System access credentials.
Superuser access management.
System access audit.
SSH securing.
Web server securing.
DB server securing.
Server minimization.
Local monitoring.
Access credentials
To access the system, create named user accounts, without privileges and with access restricted to what they need. Ideally, each user’s authentication should be integrated with a token-based two-factor authentication system. There are free and secure alternatives, such as Google Authenticator, that can easily be integrated into Linux, although that is outside the scope of this guide. Seriously consider using one.
If other accounts must be created for applications, they must be users without remote access (to achieve this, disable their shell or use an equivalent method).
Superuser access through sudo
In the event that certain users must have administrator permissions, SUDO will be used.
Base system access audit
It is necessary to keep the security log /var/log/secure active and to watch those logs with your monitoring (see the local monitoring section below).
By default CentOS has this enabled. If not, just check the /etc/rsyslog.conf or /etc/syslog.conf file.
We recommend collecting the audit logs with an external log management system. Pandora FMS can do this easily, and it is useful for setting alerts or reviewing the logs centrally when needed.
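As a minimal illustration of that kind of log review (assuming the RHEL/CentOS path /var/log/secure; Debian-based systems use /var/log/auth.log), this Python sketch counts failed SSH logins per user and source address.

```python
import re
from collections import Counter

LOG_FILE = "/var/log/secure"   # assumption: RHEL/CentOS; adjust for your distribution

pattern = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")
failures = Counter()

with open(LOG_FILE, errors="replace") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            failures[(match.group(1), match.group(2))] += 1   # (user, source IP)

for (user, source), count in failures.most_common(10):
    print(f"{count:5d} failed logins for {user!r} from {source}")
```

Fed into the monitoring system, a sudden spike in this counter is an obvious candidate for an alert.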
SSH server securing
The SSH server allows you to connect remotely to your Linux systems and execute commands, so it is a critical point and must be secured. Pay attention to the following points (a small audit sketch follows the list):
Modify default port.
Disable root login.
Disable port forwarding.
Disable tunneling.
Remove SSH keys for remote root access.
Investigate the source of keys for remote access. To do this, look at the content of the file /home/xxxx/.ssh/authorized_keys and see which machines they are from. Delete them if you think there shouldn’t be any.
Establish a standard remote access banner that clearly explains that the server is a private access server and that anyone without credentials should log out.
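A minimal, hedged sketch of such a review in Python, assuming the usual configuration path /etc/ssh/sshd_config; the expected values below (including the non-default port) are examples, not a mandate.

```python
# Expected values are illustrative; adapt them to your own policy.
EXPECTED = {
    "Port": "22022",              # hypothetical non-default port
    "PermitRootLogin": "no",
    "AllowTcpForwarding": "no",
    "PermitTunnel": "no",
    "Banner": "/etc/issue.net",
}

config = {}
with open("/etc/ssh/sshd_config") as cfg:
    for line in cfg:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)             # "Directive value"
        if len(parts) == 2:
            config[parts[0]] = parts[1].strip()

for directive, expected in EXPECTED.items():
    actual = config.get(directive, "(not set, compiled-in default)")
    status = "OK   " if actual == expected else "CHECK"
    print(f"{status} {directive} = {actual} (expected {expected})")
```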
MySQL server securing
Listening port: if the MySQL server has to provide service to the outside, make sure the root credentials are strong. If MySQL only serves an internal component, make sure it listens on localhost only.
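A hedged sketch of that check in Python, assuming the configuration lives in /etc/my.cnf (the path and file layout vary between distributions and MySQL/MariaDB versions):

```python
CONFIG_FILE = "/etc/my.cnf"   # assumption; some setups use /etc/mysql/my.cnf or conf.d fragments

bind_address = None
with open(CONFIG_FILE) as cfg:
    for line in cfg:
        line = line.strip()
        if line.startswith("bind-address"):
            bind_address = line.split("=", 1)[1].strip()

if bind_address in ("127.0.0.1", "localhost", "::1"):
    print("OK: MySQL only listens on the loopback interface")
else:
    print(f"CHECK: bind-address is {bind_address!r}; expose it only if an external service really needs it")
```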
Web server securing
We will modify the configuration to hide the Apache and OS version in the server information headers.
If you use SSL, disable unsafe protocol versions; we recommend allowing TLS 1.3 only.
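As a quick, hedged check from the outside (the URL is a placeholder and the server is assumed to present a certificate your system trusts), you can verify that the Server header no longer leaks version details.

```python
import urllib.request

URL = "https://monitoring.example.com/"   # placeholder for your console URL

response = urllib.request.urlopen(URL, timeout=5)
server_header = response.headers.get("Server", "")
print("Server header:", server_header or "(empty)")

# With ServerTokens Prod / ServerSignature Off, Apache should identify itself
# without version numbers or OS details.
if any(character.isdigit() for character in server_header):
    print("CHECK: the Server header still exposes a version number")
```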
System service minimizing
This technique can be taken very far. It consists simply of removing everything that is not necessary from the system. That way you avoid future problems with poorly configured applications that you never really needed and that might turn out to be vulnerable.
Local monitoring
All the elements of the monitoring system itself should be monitored to the highest level, especially information logs. In our case, the following active controls are always recommended on top of the standard ones:
Active security Plugin.
Complete system inventory (especially users and installed packages).
System logs and server security.
Pandora FMS started as a totally personal open source project back in 2004. I wasn’t even a professional programmer, I was doing Unix security consulting. In fact, I chose PHP even though Pandora FMS was my first PHP application; I knew a few things about ASP and my favorite programming language had been C.
A project with a single programmer and no professional users of his software yet is very different from a project with several dozen programmers and hundreds of clients using the software in critical environments. The evolution that Pandora FMS has undergone from 2004 to 2021 is a real case of steady improvement in software engineering.
Fortunately, I did not pay much attention to that subject in my degree, because most of the things that work, and that I have learned through practice, are not in any book, nor are they explained at university, because every software project and every team of people is very different. It may sound cliché, but it is the truth, and it is better to accept it and avoid formulas, because building a solid software product that can grow over time is not trivial at all.
In this article, I am going to talk about our experience, our evolution over time, but above all, about how our engineering processes work today. I have always believed that the most important part of open source is transparency, and that this should apply to everything, not only to software but also to processes and knowledge in general.
Version control system
It is an essential part of any software project. Today the ubiquitous Git is everywhere (by the way, not everyone knows that Git is the work of Linus Torvalds, original author of the Linux kernel). In short, a version control system helps a group of developers work without stepping on each other’s changes.
When the Pandora FMS project started, I worked without version control, because there was nobody else. When some people began to collaborate, we realized that a simple shared directory was not enough, because we kept overwriting each other’s code and, yes, making backups to save old versions was not a very efficient method.
The first version control system we used was CVS, which we used for eight years or more. Around 2008, we started using SVN (Subversion), a slightly more efficient system, and it wasn’t until 2013 that we started using Git and opened our official repository on GitHub.
Pandora FMS public repository on Github
Since Pandora FMS has an open source version and an Enterprise version -with proprietary code and commercial licenses- we have two GIT projects, one public on GitHub and the other private, which we manage with GitLab. The GitHub version is in sync with our private copy on GitLab at our offices. Some partners who collaborate with us in developing have access to this private repository, and through an extension of our support application (Pandora ITSM) we share all development planning tickets by releases with some of our partners, so that they can see in real time, the development planning based on “releases” and all the details of each ticket.
GitLab ticket view in Pandora ITSM
Release ticket view
Development methodology used in Pandora FMS
At Pandora FMS, we have been using our own methodology from the beginning, although we have borrowed many ideas from agile methodologies, especially from Scrum. From a life-cycle point of view, we use an adaptation of the rolling release methodology.
These are some important definitions of how we work; some of them come from Scrum, others from other methodologies.
Objectives of Pandora FMS work methodology
The objectives involve not only the development members, but also QA, the documentation team and part of the marketing team:
Maximum visualization: The entire team must see the same information, and it must flow from bottom to top and from top to bottom. By sharing objectives we will be able to do a more effective job.
What is not seen does not exist: all information relevant to the project must be reflected in the management tool, implemented with GitLab. What is not seen does not exist, and what does not exist will not be taken into account for any purpose. Strictly following this rule allows everyone to be very aware of the planning:
-Strict deadline compliance.
-Advance planning without last minute modifications.
-Clearer and more timely information.
-Elimination of work peaks, etc.
Integrity: with an increasingly large and complex project, it is imperative to maintain integrity during development. All code must follow standards.
Ticket
The ticket is the minimum work unit. There is a single person responsible for its completion and it is planned to be carried out in a milestone (version release).
A ticket is the way development work is broken down, so a big feature will be made up of several tickets, on which, ideally, several people can work.
The ticket must contain a functional specification or a description of the requirements, which can include diagrams, specifications, interface mockups, test sets, examples, etc. In some cases it may even contain the analysis and design of the whole solution.
A completed ticket must perform as specified in the functional document (ticket) and the changes that have been made to these specifications must be reflected in the ticket.
The functional specification is what allows QA to validate a ticket or not. QA will reopen a ticket if it fails to meet any functional requirement.
Members and working groups
Product Owner (PO)
The PO defines where Pandora FMS has to go, in contact with customers, support and the “real” market situation, providing technical and functional guidelines but without getting involved in development as such.
Product Committee
Group of people who meet regularly with the PO to agree on where the product is going, trying to ensure that all PO decisions are collegiate. It is made up of the leaders of the Development, QA, Support, Projects and Documentation teams.
Development Manager (DM)
The DM manages the entire development cycle: defining milestones and priorities, managing all members individually and making operational decisions. The DM reports exclusively to the PO and leads the development team.
Development Team
They are in charge of the development of large features and product improvements, complete code refactoring, change development (small features), bug fixes and product maintenance improvements.
QA Team
They verify that each atomic development unit works as defined in the specifications. They also create and maintain an ecosystem of automated testing for both the backend and the user experience.
Support Team
They are the ones who deal directly with the client solving issues. Their experience with the product’s day-to-day means that their opinions must be taken into account, that is why they are part of the product committee.
Project team
They implement the product for the end customer and are the ones closest to them, since they are often there before the project even exists. They usually contribute ideas and first-hand feature requests; for all practical purposes they are the voice of the sales department, which is why they are part of the product committee.
Training and Documentation Team
Responsible for training and the product’s documentation. They coordinate with the marketing team and the translation team.
Remote working
All team members (development, QA, documentation) telework freely. In fact, developers from Europe, Asia and America participate in Pandora FMS, and within Spain they are distributed throughout the national territory. We are a 100% distributed and decentralized company, although with traditional hierarchies.
In order to telework, we need each member to take responsibility for their work, be autonomous and commit to planning. Teleworking entails minimizing the need for oral communication and physical personal meeting, replacing them not with teleconferences, but with a precise use of the tools of the development process.
Development watch-keeping
One developer on the team is dedicated to solving incidents involving code, in permanent contact with the support team (from 8 am to 8 pm CEST). This not only provides maximum agility when solving a customer’s problem, but also ensures that code changes are integrated into the repository in an organized way.
Ticket creation and classification process
Any member of the company (including salespeople) can create a ticket in GitLab. This includes customers and partners, although in their case there is a prior filter by the support team and the sales team respectively.
The more detailed the ticket, the more unambiguous the development will be. Add images, GIFs, animations and all the necessary clarifications, as well as how to access the environment where the problem was found and who the contact persons are. A developer will never contact a customer directly; if there is a need to interact with them, it will be done through the support or project team.
Nobody, except for the DM or PO, can change a ticket milestone. On creation, the ticket will not have an assigned milestone or assigned user. The task of defining which release a ticket belongs to is the responsibility of PO and DM exclusively.
When a ticket is finished and the developer thinks it should be reviewed by a colleague, they mention it in the merge request through @xxxxx. The review must be nominal. This review is independent of the code review carried out by the department manager.
General ticket workflow
The ticket is assigned to a programmer by the DM. If a developer has no ticket assigned, they will self-assign one (see below the terms that regulate this system).
The developer must resolve any questions that arise after reading the functional document, checking with the DM or the ticket author if necessary. This must be done before starting to develop. Once the ticket is read, the developer must, in order:
Evaluate (by assigning labels) its complexity and size, reaching a prior consensus with the DM.
Develop the feature following the ticket specifications.
Document everything developed in the same ticket or, if required, in a new documentation ticket. This documentation ticket must be linked to the “parent” ticket by its #ID.
The developer must test the functionality at least in:
-The standard Docker development environment.
-The Docker development environment with data.
When it is deemed complete, it will be tagged ~ QA Pending and placed in the hands of QA.
For each FEATURE ticket there will be a reference person, usually from projects, support or even the PO. This person helps define part of the functional specification (together with the DM and PO), but above all they are the person the developer asks about details during development and, most importantly, they should see the development progress step by step so that it gets validated.
Any change to the functional will be reflected by the reference person in the ticket as comments, without altering the original functional.
If there is a child documentation ticket, QA will validate the ticket using the documentation generated by the reference person, NOT by the functional of the ticket, validating the documentation and the feature at the same time.
Release planning
When creating a ticket, the milestone must be left empty (not assigned), as must the assignee. The only ones who can classify a ticket are the DM and the PO.
A series of milestones have been defined to support the ticket classification process. Some of them, the dated ones (releases), are true milestones, while the rest should be seen as simple ticket containers.
(Not allocated): It is the absence of milestones in a ticket. For all intents and purposes, this ticket “does not exist yet.” The DM and PO will validate each and every one of these tickets to see if they make sense in the product roadmap. No developer should take any of these tickets.
Feature backlog: Tickets that will be made at some indeterminate time in the future that sooner or later will have to be addressed. No developer should take any of these tickets.
Low priority bugs: Reported bugs with no priority assigned yet by PO/DM. No developer should take any of these tickets.
STAGE: Tickets proposed by each department for planning in a product release. At each planning meeting, these tickets will be discussed, and moved to other milestones. At the end of the cycle start meeting, this milestone should be empty. The DM is the one who has the final decision as to which STAGE tickets are assigned to a certain release and which are not, relying on the product committee if necessary. No developer should take any of these tickets.
XXX: Release XXX. A milestone that groups a series of tickets to be released on a certain date; it has a deadline associated with it. In the case of RRR releases this date may change; in the case of LTS releases it may not.
The development of the tickets associated with a release must be finished 5 days before the scheduled day for the release. Tickets not completed before that date will be delayed to the next release and the delay will have to be justified to the DM.
There are two types of release milestones:
-LTS: in April and November. They are 6 months apart.
-Regular Releases (RRR): There will be 2 to 4 regular releases between LTS releases.
A developer with no assigned tasks for a release, as long as there are no pending assignment tickets in the release milestones for the developer’s team, can take one of the unassigned tickets from:
-The closest release, based on date.
-Second closest release, based on date.
CI/CD
Pandora FMS developers integrate the code of their branches in a central repository several times a day, causing a series of automatic tests to be executed whose objective is to detect faults as soon as possible and improve the quality of the product.
These tests run dynamically in a series of executors or “runners”, some of them specific, for certain architectures (e.g., ARM), that execute static code analyzers, unit tests, and activate containers to carry out integration tests in a real installation of the application.
The generation of Pandora FMS packages is completely automated. Packages are generated every night from the development branch for manual testing. They can also be generated on demand by any developer or member of the QA or support teams, from any branch through the GitLab web interface.
When a release is made from the stable branch, in addition to package generation, a series of steps are executed that deploy them to Ártica’s internal package server, to SourceForge, to Ártica’s customer support environment, and that, likewise, update the Debian, SUSE and CentOS repositories along with the official Docker images.