Virtualization and the Cloud: Round and round your data goes…
Virtualization and Cloud computing are revolutionizing the IT ecosphere and, like all revolutions, they bring good and bad consequences and extra responsibility for the supposed beneficiaries. CEOs and CIOs are obliged to make decisions on the fly, in a protean environment where the technological foundation they stand upon changes more often than a teenager getting ready to go out on a Friday night. Burdened by too much information, and acting under pressure, strategic decisions taken in the techno-heat of the techno-moment can create unwanted techno-outcomes as a result of departmental decision-making.
A departmental decision is one taken adjacent to the core business, such as a pilot project for road-testing a new technology. If you take your eye off one of these balls, the results for your infrastructure can be catastrophic, horrendous and definitely not good at all. Add to this the wider workings of your business: mergers, acquisitions, hirings and firings, restructurings, outsourcing and downsizing, refinancing and rebranding. The end result can be a multi-provider IT environment, with your Cloud supplier, SaaS provider, virtualization dealer, OSs and databases forming a motley crew of incompatible head-bangers and princesses.
I’m a consultant for a number of start-ups and medium-sized companies, plus a handful of blue chips, and one of my clients, an industrial group, is experiencing this authentic modern horror show. Their infrastructure is totally distributed, with four data centers, virtualization from various suppliers, Cloud-based SaaS… and each system with its own monitoring tool providing oversight of critical functions. What they save with one hand, by employing these technologies, they lose with the other, in terms of reduced control and overcomplicated administration.
The question is: where’s the little ball? Round and round and round she goes, where she stops nobody knows… Substitute your IT services and/or data for the little ball and you start to get an idea of the problem. If you don’t know where your applications are running or where your data is stored, how can you expect to be able to respond in case of an internal IT crisis?
- Do you always know where your services are being executed? Or what hardware is supporting which virtual infrastructure? If it’s a single provider, like VMware, to name an example, then obviously you know. If it’s a single data center, then it’s under control. But what happens when your infrastructure is complex, multi-provider and distributed?
- A long-standing client of mine observed that they had put a friendly neckbeard in charge of their systems and networks. This individual was a rock star when it came to networks and systems engineering; any question, at any time, and he had the answer. Unfortunately, this rock star was subcontracted, and when the contract expired, well, you can imagine the mess…
And what about the Cloud? I seem to hear you say. There are as many Clouds housed in Korean data centers as there are actual clouds in the skies of Montana, and they all make the same claims: isolation of your services from those of other clients; better security; basic backup services; redundancy; high availability… But try asking yourself these questions:
- Are we getting the contractually guaranteed processing power we were promised?
- Are we getting the necessary storage?
- Are our files being correctly backed up, and will they be available if our own systems go down?
- Are the high-availability systems working correctly? Is the provider prepared for a collapse of, or attack on, their systems? And where exactly are those backup systems anyway?
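Questions like these don’t have to stay rhetorical: most of them can be turned into automated checks. As a minimal sketch (the thresholds, file paths and contract figures below are hypothetical, not taken from any real provider agreement), a script could compare a backup’s age and a volume’s free space against the values you were promised:

```python
import os
import shutil
import tempfile
import time

def backup_is_fresh(path: str, max_age_hours: float) -> bool:
    """Return True if the backup file at `path` was modified recently
    enough to satisfy a (hypothetical) backup-frequency agreement."""
    age_seconds = time.time() - os.path.getmtime(path)
    return age_seconds <= max_age_hours * 3600

def storage_meets_contract(path: str, promised_free_gb: float) -> bool:
    """Return True if the filesystem holding `path` still offers at
    least the contractually promised free space (in gigabytes)."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= promised_free_gb * 1024 ** 3

# Hypothetical example: a backup file written just now should count as fresh.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"backup contents")
    backup_path = f.name

print(backup_is_fresh(backup_path, max_age_hours=24))  # a file created now is fresh
os.unlink(backup_path)
```

Checks of this kind are exactly what a monitoring platform runs for you on a schedule, raising an alert the moment a guaranteed figure stops matching reality.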
A year from now the new EU regulation, the GDPR, will be in force, placing some serious demands on data protection, and also on the information an organization is obliged to supply in case of total or temporary loss of data. Keep that obligation in mind, regarding your data, for when the regulation comes into effect.
Pandora FMS provides a unified solution, from a single administration and information point, allowing users to identify inefficiencies and overexploited or underused resources. The ability to make better decisions, in a nutshell. It allows users to justify new modernization and system-integration projects, and to compare costs depending on the services they need to run in different IT environments. It minimizes operating costs by consolidating operators. To sum up, the keywords are: location, control and recovery. Know where your data is located, keep control of it, and ensure you can recover it in case of need.
About Pandora FMS
Pandora FMS is a flexible monitoring system, capable of monitoring devices, infrastructures, applications, services and business processes.
Of course, one of the things that Pandora FMS can control is the hard disks of your computers.
Would you like to know more about what Pandora FMS can offer you? Find out here: https://pandorafms.com
If you have more than 100 devices to monitor, you can contact us through the following form: https://pandorafms.com/en/contact/
Also, remember that if your monitoring needs are more limited, the open-source version of Pandora FMS is at your disposal. Find more information here: https://pandorafms.org
Do not hesitate to send us your queries. The Pandora FMS team will be happy to assist you!
Pandora FMS’s editorial team is made up of a group of writers and IT professionals with one thing in common: their passion for computer system monitoring.