Chris Calvert of Respond Software (now part of FireEye) outlines the challenges that reduce the efficacy of network security sensors.
We have a serious sensor problem in the cybersecurity world, and it's bad, particularly when it comes to network intrusion detection and prevention sensors (IDS/IPS). It seems that many security operations center (SOC) teams have given up on these sensors ever being effective. But is the problem with sensor efficacy, or with how these sensors have been architected, managed and applied in the environment?
The answer is that three specific challenges are causing this problem:
- The managed security services provider (MSSP) position that less data is better. Why do MSSPs make this case? For one, a human bottleneck naturally occurs when large volumes of data must be analyzed: more data means higher costs and more analyst time, which does not traditionally bode well for an MSSP business model. More fundamentally, people have an upper limit on the streaming data they can analyze, and it is well below the upper limit of a machine.
- The converged device phenomenon. It's easier to include the sensor on a firewall when it's all one converged device that can be managed as a single package. However, this is very often not the best place to put a sensor. I call this weaponizing the perimeter: detection gets stacked at the boundary while lateral detection and decrypted monitoring zones, both important best practices, are ignored.
- And finally, compliance is our own worst enemy. Network monitoring is essential, but an organization can remain compliant even when its monitoring barely works; my analysis of the industry shows that 70 to 85 percent of deployed sensors are not performing for their owners. While compliance is an important aspect of any cybersecurity program, being compliant does not equate to a secure environment.
Visibility, Sensitivity and Defensive Utility
There are three aspects to consider when evaluating network sensor grids: visibility, sensitivity and defensive utility. Scientific, data-driven management of the sensor grid measures a few key performance characteristics: the volume of alerts generated relative to total traffic seen (visibility), the number and diversity of signatures that alarm (sensitivity), and whether the SOC recognizes and can react to real incidents (defensive utility).
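To make that measurement concrete, here is a minimal sketch of how the three characteristics might be computed from a batch of sensor alerts. The record structure, field names and units are hypothetical, not a description of any particular product; Python is used purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    sensor_id: str     # which sensor fired (hypothetical field)
    signature_id: str  # which signature matched (hypothetical field)
    confirmed: bool    # did the SOC confirm a real incident?
    actioned: bool     # did the SOC react to it?

def grid_metrics(alerts: list[Alert], total_traffic_gb: float) -> dict:
    """Rough indicators for the three characteristics described above:
    visibility, sensitivity and defensive utility."""
    confirmed = [a for a in alerts if a.confirmed]
    return {
        # visibility: alert volume relative to total traffic seen
        "visibility_alerts_per_gb": len(alerts) / total_traffic_gb,
        # sensitivity: how many distinct signatures actually alarm
        "sensitivity_distinct_signatures": len({a.signature_id for a in alerts}),
        # defensive utility: share of confirmed incidents the SOC acted on
        "defensive_utility": (
            sum(a.actioned for a in confirmed) / len(confirmed)
            if confirmed else 0.0
        ),
    }
```

Tracked per sensor and over time, indicators like these make it possible to spot a sensor that has gone quiet, alarms on only a handful of signatures, or produces alerts the SOC never acts on.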
Let’s start with visibility. What do you want to see? Actually, the question should be “who do you want to see?” because servers very rarely click on links without a user’s involvement. Today, most attacks begin with a user clicking a link or with a malicious insider’s cooperation, especially in the case of ransomware. This is another reason to deploy lateral sensors that look for the attacker’s reconnaissance and lateral movement within your network, which leads us nicely to the next topic: sensitivity.
Sensitivity in a network sensor is directly related to the number, diversity and effectiveness of the signatures enabled on your devices. There are approximately 20,000 signatures available, of which maybe 2,500 are modern and relevant. However, many companies, especially MSSPs, will enable only 100 or fewer. That means MSSP customers are paying for a sensor with just 0.5 percent of its total sensitivity, or 4 percent of its assumed relevant sensitivity, enabled for active monitoring. MSSPs do this solely to cut alert volume to a level that human security analysts can manage. These devices exist to alarm on potentially malicious activity, but we have essentially blinded them, significantly reducing their value in the process.
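A quick sanity check of those coverage figures, using the signature counts quoted above (the variable names are illustrative):

```python
# Signature coverage implied by the counts in the text.
TOTAL_SIGNATURES = 20_000    # approximate total available
RELEVANT_SIGNATURES = 2_500  # modern, relevant subset
ENABLED = 100                # typical MSSP configuration

print(f"Total coverage:    {ENABLED / TOTAL_SIGNATURES:.1%}")    # -> 0.5%
print(f"Relevant coverage: {ENABLED / RELEVANT_SIGNATURES:.1%}")  # -> 4.0%
```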
Another factor related to sensitivity in a network sensor is…