The average time to detect and contain a data breach is 280 days, according to IBM’s “2020 Cost of a Data Breach Report.” This makes clear that monitoring IT infrastructure and relying on automated security alerts are not enough to detect sophisticated, well-resourced cyber attacks. The shortfall has only grown as most IT infrastructures have shifted to distributed, microservices-based application architectures.
If security teams want to reduce detection and response times, there must be a concerted effort to improve on the log, monitor and react approach to data security. One way to achieve this is to adopt a technology stack that delivers observability.
Security observability isn’t just security monitoring
Security observability has been defined as a measure of how well a system’s internal states can be inferred from its external outputs. For IT administrators, that means determining how well logs and events, generated by hardware and software components, reveal the true state of their IT environment.
Most networks and cloud platforms generate reams of logs and monitor events — such as logins and requests for resources — which are used to generate dashboard data and alerts. But security observability requires more than capturing and processing basic logs and events and reporting that a problem occurred. Metrics, traces and the tools to analyze them are also necessary to produce actionable data that explains why a problem occurred and what resources are at risk — even if it involves interactions across internal networks and cloud services.
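As a rough sketch of what “more than basic logs” means in practice, the hypothetical snippet below tags every event with a shared trace ID and contextual fields, so a failure in one service can be tied back to the request that triggered it. All service, field and action names here are invented for illustration:

```python
import json
import uuid

def emit_event(service, action, trace_id, **context):
    """Emit a structured event carrying a trace ID and contextual
    fields, so events from different services can be correlated later."""
    return json.dumps({
        "service": service,
        "action": action,
        "trace_id": trace_id,
        "context": context,
    })

# One request flowing through two services shares a trace ID,
# so a failure in the second can be linked back to the first.
trace = str(uuid.uuid4())
frontend = emit_event("frontend", "request_received", trace, path="/orders")
backend = emit_event("orders-db", "query_failed", trace,
                     table="orders", error="timeout")

# Filtering stored events by trace ID reconstructs the request's timeline.
events = [json.loads(frontend), json.loads(backend)]
timeline = [e["action"] for e in events if e["trace_id"] == trace]
```

A bare log line would only report that a query failed; the correlated events also show which request, path and table were involved.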
Achieve security observability with contextual data
Many security teams already have a SIEM tool in place and may have additional tools that provide AI and deep mining capabilities. This is a great starting point; it may well preclude the need to buy into products claiming to be “observability-ready.”
There are four main elements infosec teams need to deliver security observability:
- an IT environment built to output the rich contextual telemetry to show its internal state;
- a big data back end capable of simultaneously ingesting large quantities of data while delivering real-time data and responses to queries;
- tools that can turn this telemetry into actionable data; and
- a security team with the time, skills and resources to fully explore and act on all this data.
Obtaining the right telemetry requires a team effort. Application developers, system architects and administrators need to ensure every component and service always records relevant metrics, events, logs and traces, as well as additional metadata about the system’s state when an event occurs.
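A minimal sketch of that idea: pair each recorded event with a snapshot of the system’s state at the moment it occurs. The specific fields captured here are illustrative, not prescriptive:

```python
import json
import platform
import time

def snapshot_state():
    """Capture metadata about the system's state at the moment an
    event occurs. Real deployments would add resource usage, build
    version, config hashes and so on."""
    return {
        "timestamp": time.time(),
        "host": platform.node(),
        "python": platform.python_version(),
    }

def record_event(name, level, **fields):
    """Combine the event itself with a state snapshot, so responders
    see not just what happened but the conditions it happened under."""
    return json.dumps({
        "event": name,
        "level": level,
        "fields": fields,
        "state": snapshot_state(),
    })

entry = json.loads(record_event("login_failed", "warning", user_id="u-123"))
```

Enforcing this pattern in a shared logging library is one way developers, architects and administrators can guarantee every component records the same rich context.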
This broad scope of information is important to prevent blind spots, especially for environments containing a large number of components that dynamically scale up and down to meet demand. For example, administrators need the ability to determine what endpoints are running, what actions can be executed, what data can be handled and, most importantly, what other internal and external endpoints can or cannot communicate with them. Dependency maps are an effective way to record this type of information.
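A dependency map can be as simple as an allowlist of which services may talk to which endpoints; observed connections are then checked against it. The sketch below uses invented service names to show the idea:

```python
# Hypothetical dependency map: which endpoints each service is
# allowed to communicate with. All names are illustrative.
ALLOWED = {
    "web-frontend": {"orders-api", "auth-service"},
    "orders-api": {"orders-db", "payment-gateway"},
    "auth-service": {"auth-db"},
}

def may_communicate(source, target):
    """Check an observed connection against the dependency map.
    Anything not listed is a blind spot or a potential compromise."""
    return target in ALLOWED.get(source, set())

# An expected dependency versus a connection that falls outside the map
# and therefore warrants investigation.
expected = may_communicate("orders-api", "orders-db")
suspicious = not may_communicate("web-frontend", "orders-db")
```

In an environment that scales dynamically, the map itself must be regenerated as instances come and go, rather than maintained by hand.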
To DIY or not to DIY
It is possible to build a project from scratch that has security observability designed and built in, but it is a big step up from monitoring. Observability requires sufficient data from the entire surface area of the project. This context enables IT security teams to better understand the relationships and dependencies within the project, as well as its performance and health. It also provides security teams with a timeline of events that occurred while fulfilling any request.
On the other hand, retrofitting existing projects with observability may well require a dedicated tool. In these cases, look for a product that can auto discover applications, containers, services, processes and infrastructure, as well as create normal behavior baselines. Security observability tools should also update dynamically as the environment changes.
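A normal-behavior baseline can be sketched very simply: summarize historical measurements, then flag new values that deviate too far. This is a toy statistical version of what such tools automate, with made-up sample data:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Summarize historical samples (e.g., requests per minute
    for one service) as a mean and standard deviation."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from
    the baseline mean. A real tool would also update the baseline
    dynamically as the environment changes."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Illustrative history of a metric hovering around 100.
history = [98, 102, 101, 97, 100, 103, 99, 101]
baseline = build_baseline(history)
```

Production tools layer auto-discovery and continuous re-baselining on top of this, so the definition of “normal” tracks the environment instead of going stale.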
Determine whether — and how — to invest
There are a few things for IT leaders to keep in mind when assessing the upfront cost of a security observability strategy. Consider how this approach would improve development and operations teams’ efforts to debug issues in complex distributed systems. By helping proactively detect and resolve these issues, observability can benefit the organization’s business reputation and security posture.
All benefits aside, not every enterprise or application needs complete observability. When deciding whether to invest, determine if the organization’s applications meet the following criteria:
- handle large volumes of interactions or transactions that should never fail;
- scale up and down on a regular basis due to changes in load; and
- have rolling updates.
Regardless of industry, most enterprises are now data-driven. That data, and the systems that process it, need better protection. Observability can improve security because it enables incident responders to react more quickly and more effectively despite the complexity of modern digital enterprises.