It seems that, in many computing environments, observability is becoming too much of a good thing. A global survey of more than 315 IT managers, cloud application architects, DevOps professionals, and site reliability engineers (SREs), conducted by Dimensional Research on behalf of log data management platform provider Era Software, revealed that 96% saw an explosion in logging data.
The survey also found that 79% of respondents indicated that the overall cost of log observability and management will skyrocket in 2022 if current tools do not evolve.
More than three-quarters of respondents (78%) also noted that attempts to manage log data volumes have had mixed or undesirable results, such as the inability to access the data. Nearly two-thirds (65%) are evaluating their observability options, while another 41% are considering doing so.
Stela Udovicic, senior vice president of marketing for Era Software, said the survey clearly shows that as organizations rely more on logs to analyze IT events, the cost of storing all that log data becomes a significant challenge. Overall, the survey reveals that the use of observability tools and platforms has jumped 180% as IT teams struggle to manage increasingly complex IT environments.
On the positive side, more business users are benefiting from log data: 83% of respondents say business stakeholders outside of IT use information derived from log data, and 96% say log data is used to troubleshoot business issues.
Observability has always been a fundamental tenet of DevOps best practices, but achieving it has always been a challenge. Monitoring tools are designed to consume predefined metrics to identify whether a specific platform or application is performing as expected. The metrics tracked typically focus on, for example, resource usage. However, whenever there is a problem, it can still take days, sometimes weeks, to uncover the root cause through what amounts to a process of elimination.
In contrast, observability combines metrics, logs, and traces (a specialized form of logging) to instrument applications in a way that simplifies troubleshooting without relying solely on a limited set of metrics that have been predefined to monitor a specific process or function. DevOps teams can use queries to interrogate data in a way that makes it easier to discover the root cause of a problem. Observability platforms correlate events in a way that makes it easier for analytics tools to identify abnormal behavior indicative of an IT problem.
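To make the idea concrete, here is a minimal sketch of that correlation step: structured log entries carrying a trace ID are grouped per request, so a query can surface the full event history of any failing trace instead of forcing a process of elimination. The log format, field names, and sample data are all hypothetical, chosen only for illustration.

```python
import json
from collections import defaultdict

# Hypothetical structured log lines: each entry carries a trace ID so that
# events from one request can be correlated across services.
LOG_LINES = [
    '{"trace_id": "a1", "service": "api", "level": "INFO", "msg": "request received"}',
    '{"trace_id": "a1", "service": "db", "level": "ERROR", "msg": "connection timeout"}',
    '{"trace_id": "b2", "service": "api", "level": "INFO", "msg": "request received"}',
]

def errors_by_trace(lines):
    """Group log entries by trace ID and return only traces containing errors."""
    traces = defaultdict(list)
    for line in lines:
        entry = json.loads(line)
        traces[entry["trace_id"]].append(entry)
    # Keep a trace's full history (INFO and ERROR alike) if any event failed,
    # so the events leading up to the failure are visible in one place.
    return {tid: events for tid, events in traces.items()
            if any(e["level"] == "ERROR" for e in events)}

failing = errors_by_trace(LOG_LINES)
print(sorted(failing))  # only the trace that contains an error is returned
```

Production observability platforms perform this kind of correlation at scale over far richer telemetry, but the principle is the same: queryable, correlated events rather than isolated predefined metrics.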
The problem is that IT teams need to find a way to store all the data generated by the logs that they need to analyze. Each IT team can, of course, reduce the amount of log data they keep. However, there is always a concern that in the event of an incident, critical log data may not be available.
Udovicic said most IT organizations have historically viewed log data storage as a necessary evil. The main problem most of them face, she noted, is the simple fact that log data is not easy to work with.
Regardless of how organizations view log data, the amount of it to manage is only going to increase as DevOps workflows mature. The problem IT teams now face is finding a way to minimize the cost of storing everything.