Here at Pandora FMS blog we like to get up early, prepare a cup of pennyroyal mint and while it settles, do a couple of stretches, wash our face and start the day defining strange words worth something for our readers. Today it’s time for: Telemetry!

Do you already know what telemetry is? Today we will tell you.

Shall we get straight to the point?

Straight to the point then it is!

Telemetry, roughly speaking, is the automatic measurement, collection, and transmission of data from remote sources by data-gathering devices.

That data is transmitted to a central location, where it is analyzed; from then on, you can consider your remote system supervised and controlled.

Of course, telemetry data helps keep security under control while also improving customer experience and monitoring application health, quality, and performance.

But let’s go further, what is the true purpose of telemetry?

As you can imagine, collecting telemetry data is essential to managing IT infrastructures.

Data is used to monitor system performance and keep actionable information on hand.

How do we measure telemetry?

Through monitoring!

Monitoring tools measure all types of telemetry data. 

They start with server performance and head towards actionable infinity.

Some types of telemetry data

It all starts with a small signal that indicates whether a server is active or inactive.
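As a minimal sketch of that first signal, here is a Python up/down check that simply tries to open a TCP connection. The host and port are placeholders; real monitoring tools use richer probes than this:

```python
import socket

def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check a hypothetical web server (host and port are placeholders).
status = "active" if is_up("example.com", 80) else "inactive"
print(f"server is {status}")
```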

Then it tends to get complicated. 

Event and metric data include things like a server's CPU utilization, with peaks and averages over different periods. 

For example, a type of telemetry data to be monitored includes server memory utilization and I/O loading over time.
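To illustrate what "peaks and averages over different periods" means in practice, here is a minimal Python sketch over invented utilization samples; the figures and the window size are arbitrary:

```python
from statistics import mean

# Hypothetical CPU utilization samples (percent), one reading per minute.
cpu_samples = [12, 15, 11, 85, 90, 14, 13, 12, 78, 16, 15, 14]

def window_stats(samples, window):
    """Yield (average, peak) utilization for each fixed-size window."""
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        yield mean(chunk), max(chunk)

for avg, peak in window_stats(cpu_samples, 4):
    print(f"avg={avg:.1f}%  peak={peak}%")
```

Averages tell you about sustained load; peaks tell you about the spikes that averages smooth away.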

This data is particularly important when using server virtualization.

In these situations, the statistics reported by virtual servers may not reveal problems with CPU or memory utilization; instead, the underlying physical server may be oversubscribed in terms of physical memory, virtualization, CPU, and I/O connectivity with peripherals.

Finally, server-specific metrics should include user requests over time and concurrent user activity, plotted on standard deviation charts.

This will reveal how your systems are being used in general, as well as information about server performance.
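A standard deviation chart of concurrent users can be sketched numerically like this; the user counts are hypothetical, and the ±2σ band is an arbitrary example threshold, not a recommendation:

```python
from statistics import mean, stdev

# Hypothetical concurrent-user counts sampled every 5 minutes.
users = [120, 132, 118, 125, 410, 122, 130, 127]

avg = mean(users)
sd = stdev(users)
upper, lower = avg + 2 * sd, avg - 2 * sd

# Points outside the +/-2 sigma band are worth a closer look.
anomalies = [u for u in users if u > upper or u < lower]
print(f"mean={avg:.0f}, stdev={sd:.0f}, anomalies={anomalies}")
```

Here the spike to 410 users stands out immediately against the band, even though the overall average looks unremarkable.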

Telemetry Data Monitoring

Now that we’ve taken a look at servers and their telemetry, let’s dig a little deeper into some of the fundamental components of the physical infrastructure underlying applications.

This includes:

  • Network infrastructure.
  • Storage infrastructure.
  • Capacity.
  • Overall bandwidth consumption.

As any experienced IT professional will warn you:

It is important to quantify network behavior beyond the strictly commonplace.

Measuring network traffic in bits per second across LANs and sub-LANs within your application infrastructure should always be part of monitoring network utilization.

To predict when packets will be lost and when storms may take place in your network, it is essential to understand the theoretical and practical limits of these segments.
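As a worked example of the bits-per-second calculation, assuming two readings of an interface byte counter and a hypothetical 100 Mbit/s link (all numbers invented for illustration):

```python
# Two readings of an interface byte counter, taken `interval` seconds apart.
bytes_t0 = 1_500_000_000
bytes_t1 = 1_560_000_000
interval = 60                    # seconds between readings
link_capacity_bps = 100_000_000  # theoretical limit: 100 Mbit/s link

# Convert the byte delta to bits per second, then to percent of capacity.
bps = (bytes_t1 - bytes_t0) * 8 / interval
utilization = bps / link_capacity_bps * 100

print(f"traffic: {bps / 1_000_000:.1f} Mbit/s ({utilization:.0f}% of capacity)")
```

Comparing that utilization figure against the segment's practical limit is what lets you anticipate packet loss before it happens.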

Network monitoring must reveal how each segment’s bandwidth is utilized over time across the different areas of your network.

Monitoring certain network protocols will also provide a more detailed view of application usage in real time and, perhaps, of performance issues for certain features.

Likewise, monitoring requests to certain network ports can also reveal any security gaps, as well as routing and switching delays in the relevant network components.

In addition to monitoring raw network usage, it is necessary to monitor the storage systems connected to the network.

Specific telemetry is required to show storage usage, wait times, and likely disk failures.

Again, it is important to monitor both overuse and underuse of storage resources.
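A minimal sketch of over/underuse detection using Python’s standard library; the thresholds are invented examples to be tuned for your environment:

```python
import shutil

# Arbitrary example thresholds, in percent used.
OVERUSE, UNDERUSE = 90, 10

usage = shutil.disk_usage("/")
percent_used = usage.used / usage.total * 100

if percent_used > OVERUSE:
    print(f"WARNING: disk {percent_used:.0f}% full")
elif percent_used < UNDERUSE:
    print(f"NOTE: disk only {percent_used:.0f}% used; resource may be oversized")
else:
    print(f"disk usage OK ({percent_used:.0f}%)")
```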

Some basic application telemetry monitoring data

It is very important to monitor telemetry related to database access and processing, such as the number of open database connections, which can spike and affect performance.

Tracking this over time allows you to spot design decisions that do not scale as application usage grows.

It is equally crucial to track the number of database queries, their response times, and the amount of data circulating between the database and applications.
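As a toy illustration, assuming an in-memory SQLite database standing in for a production one, you might count open connections and time each query like this:

```python
import sqlite3
import time

# Hypothetical registry of open connections (a real pool would manage this).
open_connections = []

def get_connection():
    conn = sqlite3.connect(":memory:")
    open_connections.append(conn)
    return conn

def timed_query(conn, sql):
    """Run a query and measure its response time in milliseconds."""
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    return rows, elapsed_ms

conn = get_connection()
conn.execute("CREATE TABLE users (id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?)", [(i,) for i in range(100)])

rows, ms = timed_query(conn, "SELECT COUNT(*) FROM users")
print(f"open connections: {len(open_connections)}")
print(f"count={rows[0][0]}, elapsed={ms:.2f} ms")
```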

Outliers and averages should also be taken into account.

Unusual latency can remain hidden if only averages are tracked, yet those outliers can still have a negative impact and irritate users.
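A small numeric example of why averages hide outliers, using invented latency figures and a simple nearest-rank percentile:

```python
from statistics import mean

# Hypothetical response times in milliseconds: mostly fast, two slow outliers.
latencies = [40, 42, 38, 41, 39, 43, 40, 1200, 42, 39, 41, 900]

def percentile(data, p):
    """Nearest-rank percentile: simple, and good enough for a dashboard."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

avg = mean(latencies)
p95 = percentile(latencies, 95)
# The average looks tolerable; p95 exposes the outliers users actually feel.
print(f"average={avg:.0f} ms, p95={p95} ms")
```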

Your monitoring strategy should always take into account tool exceptions, database errors and warnings, and application server logs, combing them for unusual activity…

And that’s just the beginning!

Your monitoring software

Having a solid monitoring strategy is crucial, but so is having a well-thought-out reaction strategy that incorporates:

  • Determining, understanding and initiating root cause analysis.
  • A written communication strategy that includes the names and contact details of those responsible.
  • Identifying easy solutions to restore the program in the short term.
  • A research strategy to prevent future problems.

Telemetry Monitoring Elements

Some telemetry monitoring elements that you may use:

  • Dashboards or other real-time system information and telemetry tools.
  • Technologies for analyzing records safe for use with production systems.
  • Business intelligence to retrieve data from records, such as usage trends or security issues during specific time periods.
  • Tools that automate risk detection, recovery, and mitigation to get rid of manual labor.

Using a centralized system and working with a software vendor, you can put in place a robust monitoring strategy that develops over time and becomes more comprehensive.

And there, my friend, is where we come in!

Want to know more about Pandora FMS?

The total monitoring solution for full observability

Contact our sales team, ask for a quote, or resolve any doubts about our licenses.