LogicMonitor Introduces Unlimited Log Data Retention

LogicMonitor announced today that it will provide an option for IT organizations to maintain an unlimited amount of log data on its Software as a Service (SaaS) platform.

Tej Redkar, product manager at LogicMonitor, said one of the issues holding back progress in observability is the cost of storing log data. LogicMonitor has decided to remove this problem by allowing IT teams to store log data on its LM Logs service for as long as they want, Redkar said.

That data will also remain available as hot storage, with no need to wait for offline data to be rehydrated, he noted. Instead, it is readily accessible alongside the metrics and distributed trace data that LogicMonitor also collects, Redkar said.

In general, LogicMonitor advocates for centralizing observability for the entire IT organization through a SaaS platform with machine learning algorithms that automatically detect anomalies in real time based on millions of events captured through log data. The problem many organizations face today is decentralized observability data silos that are more complex to maintain while providing less visibility at a higher total cost, Redkar said.
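LogicMonitor has not disclosed the specific algorithms behind this capability, but the core idea of flagging real-time anomalies in log event volumes can be illustrated with a minimal sketch. The function below is purely hypothetical (the names, window size, and threshold are illustrative assumptions, not LogicMonitor's implementation): it flags any interval whose log event count deviates sharply from a trailing baseline.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(counts, window=5, threshold=3.0):
    """Flag indices where a per-interval log event count deviates more
    than `threshold` standard deviations from the trailing `window`
    of prior counts. Hypothetical illustration, not a vendor algorithm."""
    anomalies = []
    history = deque(maxlen=window)  # trailing baseline of recent counts
    for i, count in enumerate(counts):
        if len(history) == window:
            mu = mean(history)
            sigma = stdev(history) or 1.0  # avoid division by zero
            if abs(count - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(count)
    return anomalies

# A steady stream of ~100 events per interval, then a sudden spike:
print(detect_anomalies([100, 102, 98, 101, 99, 500]))  # → [5]
```

Simple statistical baselines like this break down as environments grow more dynamic, which is one reason platforms train machine learning models on large volumes of historical log data instead; the longer the retained history, the better the baseline.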

The LogicMonitor platform, on the other hand, is designed to be accessible to, say, a network operations team as well as a DevOps team, he noted.

Not all organizations need to store log data forever, but Redkar said as IT management continues to evolve, more organizations are moving toward storing metrics, traces and logs for longer periods. The rise of open source agent software is also making it cheaper for organizations to capture this data in the first place.

It is not clear to what extent other observability platform vendors will provide unlimited data retention, but as the cost of storing data in the cloud continues to drop, it is evident that charging storage fees to IT organizations is becoming untenable. There will always be costs associated with storage, but that does not justify charging additional fees beyond the base cost of the service. The only thing accomplished by charging for storage separately is to discourage IT teams from storing the data needed both to achieve observability and to better train the machine learning algorithms that reveal anomalies.

Observability, in one form or another, has been a fundamental tenet of DevOps best practices for years. Initially, DevOps teams focused on continuous monitoring as the most effective way to proactively manage application environments. Observability platforms with machine learning algorithms allow events to be correlated so that analysis tools can more easily identify abnormal behavior in real time. With this information, it becomes much easier for IT teams to resolve issues faster.

In fact, there may even come a day when the so-called “war room” meetings that are convened to identify the cause of an IT problem through a careful process of elimination are no longer necessary. In the meantime, however, the total number of IT incidents that actually cause disruption is expected to steadily decline, even as IT environments become more complex.