The difference between intrusion detection systems (IDS) and intrusion prevention systems (IPS) is right there in the names: one system detects intrusions while the other strives to prevent them.
To put it another way, detection allows traffic to flow through a network and triggers an alert when known threats are identified. Prevention, on the other hand, will block that traffic from continuing on its way based on the same characteristics.
Modern IDS and IPS systems combine these features and can operate in either mode. The Bricata next-generation network sensors, for example, enable customers to make that selection as they deem appropriate for their specific security posture.
This often conjures up an important question: if a sensor can identify patterns and detect threats, why don’t more organizations use these sensors to just block the traffic? Why isn’t IPS a more popular or viable cybersecurity solution?
The answer to these questions rests both in traditional concerns and in the evolution of the current threat landscape, which together have all but ruled out prevention as an option.
Why Isn’t IPS a More Popular Solution?
At one time, IPS was poised to overtake IDS as a leading network security strategy. Its deep-packet inspection capability held the promise of both blocking threats and eliminating the deluge of false-positive and trivial true-positive alerts. However, the notion of prevention, as appealing as it sounds, never quite caught on, for several important reasons.
First, a network sensor by its very nature sits between two points on the network to observe traffic, typically between the client and server. A sensor that is merely detecting malicious activity can be placed on a tap, but to block traffic, a sensor must be installed inline. Anytime you place a device inline, you add network latency and introduce another potential point of failure.
Traditional IPS would allow some packets to flow through until it could make a determination and reset the connection. The processing time could take several milliseconds per connection, the aggregate of which slows down network performance.
Additional disruptions also arose from prevention rules written based on threat intelligence. Rules, such as those created with Snort, provide a simple definition that helps specify unique characteristics of network traffic that could be harmful. When those conditions are met, an IPS will block the communication.
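To make this concrete, here is a minimal rule in Snort syntax; the message, content match, and SID are hypothetical illustrations, not drawn from any real ruleset:

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"HYPOTHETICAL example - suspicious User-Agent"; flow:to_server,established; content:"User-Agent|3a| EvilBot"; sid:1000001; rev:1;)
```

On a detection-only sensor this rule raises an alert; on an inline sensor, changing the action from `alert` to `drop` turns the same definition into a blocking rule.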
The problem is that new rules, while useful for addressing dynamic threats, often receive limited testing and can have unintended consequences. Most security organizations find it undesirable to risk running a new blocking rule that could inadvertently interfere with normal business traffic and workflow.
The third and final major objection to blocking threats with IPS is based on the network itself. If an inline sensor goes down, does the organization risk taking down the entire network as well? Network redundancy and failover must be considered before implementing IPS.
Most organizations today have implemented contingencies to route traffic around network devices for just that purpose. Vendors, too, have stepped up by offering failover systems. In aggregate, however, the latency and the potential for network disruption tempered the market’s appetite for IPS.
Demands on Computing Power at Odds with Prevention
Threats are not static; they grow increasingly sophisticated. Where traditional IPS was able to match a signature in milliseconds, today some of the older systems can take 50 or 60 milliseconds.
If security is using sandboxing techniques, identifying a new variant of malware can take anywhere from 30 seconds to five minutes. And that’s without a queue of files waiting for a turn in the sandbox. No organization can hold a connection open for five minutes to determine whether a binary is bad, so it’s virtually impossible to use blocking as the preferred technique.
This problem is compounded by the fact that some advanced threats require longer-term behavioral analysis to identify them. Security can’t just look at one connection to make a determination. Instead, it must evaluate many connections that may span hours, or even days in order to identify a pattern of behavior.
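A minimal sketch of what such longer-window analysis can look like, here using beaconing detection as the example pattern. The function name, thresholds, and data shapes are illustrative assumptions, not Bricata's implementation; the point is that the input spans hours of history, far longer than any connection could be held open for a blocking decision:

```python
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(connections, min_events=6, max_jitter=2.0):
    """Flag destinations contacted at suspiciously regular intervals.

    `connections` is a list of (timestamp_seconds, destination) tuples
    collected over hours or days. Regular check-in gaps (low jitter
    around the mean interval) suggest automated command-and-control
    traffic rather than a human browsing.
    """
    by_dest = defaultdict(list)
    for ts, dest in connections:
        by_dest[dest].append(ts)

    beacons = []
    for dest, times in by_dest.items():
        if len(times) < min_events:
            continue  # too little history to judge a pattern
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if pstdev(gaps) <= max_jitter:
            beacons.append((dest, mean(gaps)))
    return beacons

# Hypothetical traffic: one host checks in every 60 seconds,
# another is contacted at irregular, human-looking times.
conns = [(i * 60, "c2.example.net") for i in range(10)]
conns += [(t, "cdn.example.com") for t in (5, 200, 210, 900, 1500, 1501, 3000)]
print(find_beacons(conns))  # only c2.example.net is flagged
```

No single connection in this log is suspicious on its own; only the aggregate timing pattern, visible across many connections, reveals it.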
The end result: emerging threats, such as zero-day exploits, take more and more computing power and longer windows of time to analyze, while blocking decisions must be made in milliseconds to avoid impacting network performance. This is not sustainable, as most organizations do not have the resources to simply keep throwing more computing power at the problem.
3 Lessons from IPS that Drive the Need for Threat Hunting
If we cannot block threats from getting into our networks, how should our thinking as a community evolve? What is the framework by which we protect our organizations?
I suggest there are three key principles:
1) Acknowledge that we cannot block all threats.
Acknowledgment is the first step to identifying what’s possible. It’s not possible to block all threats, all the time, and acknowledging that opens up the possibilities associated with detection. Cybersecurity has become a puzzle of risk management, and the foreseeable security strategies of the future rest in detection, not prevention.
2) Develop a comprehensive policy for triage.
If detection is to be our primary focus, then a finely tuned policy for triage becomes pivotal. If a threat is able to obfuscate itself or evade standard detections, there is a higher likelihood it has also moved laterally within your organization. Security must operate on the assumption that more of it is out there.
The triage process should incorporate best practices in threat hunting to understand the context of an incident and which other devices were potentially affected. Security needs the ability to clearly draw lines around a network incident the way detectives encircle a crime scene with yellow police tape.
3) The vendor community must simplify the process.
Threat hunting can require specialized skills and be labor intensive. Just as security can’t keep adding computing power, it can’t simply keep adding human resources. It’s imperative that vendors seek out ways to simplify the tools and processes for threat hunting. When an organization initiates triage procedures, threat hunting tools should automatically scope the problem and direct the investigation to the next logical step.
* * *
We believe the market is shaping up in such a way that most security organizations will be more inclined to use standalone intrusion detection for rapid detection rather than prevention. It’s a conscious decision the enterprise makes, knowing security operations can hunt for the threats that triggered an alert.
If you enjoyed this post, you might also like:
Study on Fileless Attacks Underscores Risk of Over-Reliance on Endpoint Security