How to Measure Threat Hunting ROI

The Problem with Threat Hunting Metrics

Threat hunting is a massive commitment of time, resources, team members, and technology. An investment that significant would normally be measured carefully to confirm it’s driving sufficient value for the team. The trouble is, there’s no established benchmark for “success” in threat hunting. Measuring the ROI of a cyber threat hunting program is challenging because its benefits are often qualitative and preventative – and therefore hard to quantify in monetary terms.

No hits can mean that your environment is secure – or it can mean that threats are present but other variables are at play. A lack of hunt findings might indicate that hunters are too inexperienced to uncover sophisticated threats, that they don’t have access to the right information, or any number of other possibilities. On the other hand, identifying many hits confirms that you’ve caught malicious actors in your network…but it can also mean that the network is vulnerable and your security strategy needs to be adjusted.

As a result, security teams struggle to measure ROI – threat hunting just isn’t that black and white. There are other key indicators that threat hunters and their leadership can look to when trying to understand the success of their program, AND of individual hunts.

Building a Strong Threat Hunting Foundation

Threat hunting is qualitative in nature. It’s so reliant on the threat hunter’s experience, expertise, skill level, environmental context, and unique approach that numbers alone may not provide adequate information. Therefore, it’s important to ensure the approach itself is comprehensive before trying to analyze results.
When assessing their threat hunting approach, threat hunters should ask themselves:
  • Do I have the right hypothesis?
    • Is the threat I’m looking for plausible?
  • Do I have the right behaviors?
    • Am I looking for the right things?
    • Am I looking for everything that could possibly indicate the presence of this threat?
  • Do I have the right data?
    • Am I looking in the right places?
    • Am I looking at the right logs, systems, and tools in my tech stack?
    • Are we collecting the necessary data?
    • Have we kept data long enough to find something? Are we cleaning house too frequently?
  • Was I thorough enough in my analysis to confidently say “it’s not here”?

Measuring Threat Hunting Success: Key Metrics

When measuring threat hunting success, there are two key puzzle pieces to investigate: the success of the program as a whole, and the success of each individual hunt.

1. Measuring Success of the Threat Hunting Program

Analysis of the threat hunting program can be separated into two categories: the activity (hunts) and results (hits).

Activity metrics capture how often threat hunts are conducted, and they help organizations understand the amount of time, effort, and resources going into their threat hunting program. The second key component, result metrics, captures what the hunters are actually able to find. Coupled together, these metrics let teams contextualize and understand the efficiency of the hunts they’re conducting.

Activity Metrics

Activity metrics refer to the frequency and volume of hunts within a given period. Measuring the program’s activity is important in determining the team’s efficiency.
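
As a simple illustration, activity tracking can start as nothing more than counting completed hunts per period from whatever hunt log the team already keeps. The log structure and field names below are assumptions made for the sake of the sketch, not a prescribed format.

    # Minimal sketch: counting completed hunts per month from a hunt log.
    # The log structure and field names are illustrative assumptions.
    from collections import Counter
    from datetime import date

    hunt_log = [
        {"hunt_id": "H-101", "completed": date(2024, 3, 4)},
        {"hunt_id": "H-102", "completed": date(2024, 3, 18)},
        {"hunt_id": "H-103", "completed": date(2024, 4, 2)},
    ]

    hunts_per_month = Counter(h["completed"].strftime("%Y-%m") for h in hunt_log)
    print(hunts_per_month)  # Counter({'2024-03': 2, '2024-04': 1})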

The frequency of hunts might be limited due to challenges like:

  • They don’t have enough skilled personnel, or the skilled personnel they do have are stretched thin by resource constraints
  • They have an unfocused security strategy with a reliance on headlines and emergent threats (instead of the ones that are most likely to impact the organization)
  • The organization lacks threat prioritization and doesn’t know where to start
  • Threat research takes a significant amount of time
  • They’re operating with a siloed, disconnected tech stack that blocks hunters from finding the data they need

When analyzing the activity of the threat hunting program, security teams should pay close attention to the types of threats they’re hunting. Most organizations probably have at least some blind spots in their threat detection – a comprehensive threat hunting strategy should pay special attention to those areas to ensure they’re still being monitored and protected.

Ideally, threat hunting activities complement their detection strategy and fill in the gaps where necessary.

Result Metrics

Threat hunters need to know that the threats they’re hunting are relevant and impactful to the organization. Regardless of how frequently they’re hunting, it’s important that the hunts actually matter and are driving meaningful security value.

Breadth / Relevance of Threats: When measuring threat hunting results, hunters need to know that the threats they’re investigating are the ones that actually matter most to their organization. Just because they’re hunting many different threats, that doesn’t mean they’re being thorough – at least, not in the right way. They need to measure the breadth of threats hunted against their threat profile (or another tool used to identify the threats that are most relevant and impactful to their organization).

When assessing the breadth and relevancy of threats at hand, hunters should ask themselves:

  • Am I hunting the threats that matter?
  • How much of my threat profile have I hunted?
  • Do we have the data? Can we hunt for these threats?
    • Do we know the behaviors and indicators?
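
To make the “how much of my threat profile have I hunted?” question concrete, the math is a simple comparison between the threats hunted to date and the threat profile. The threat names below are placeholders, not a recommended profile.

    # Minimal sketch: what fraction of the threat profile has been hunted?
    # The threat names are placeholders, not a recommended profile.
    threat_profile = {"Qakbot", "ALPHV", "Scattered Spider", "Kimsuky"}
    threats_hunted = {"Qakbot", "ALPHV", "Lazarus Group"}

    profile_coverage = len(threat_profile & threats_hunted) / len(threat_profile)
    print(f"Threat profile hunted: {profile_coverage:.0%}")  # 50%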

Detection Efficacy / Validation: Threat hunting is the investigation of threats that have evaded existing security measures such as SIEM, EDR, and XDR. A byproduct of threat hunting is that the hits it generates can help teams understand how effective those existing measures actually are. If the organization’s hunters frequently discover malicious activity that slipped past defensive measures, that could be an indication that their detection methods are not up to par.

Therefore, when analyzing the success of the threat hunting program, teams must understand how well their detection methods and alerts are really working. Through detection validation, security teams can test and tune their rules as needed to ensure they’ll fire when it matters most. The core question here is, “Of everything we’re hunting, how much are we actually alerting on?” In some circumstances, this can actually help take some of the work off the hunter’s plate; the more confident they are in their detections, the less they’ll need to include in their hunt program.
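
The “of everything we’re hunting, how much are we actually alerting on?” question can be framed as a simple ratio. A minimal sketch follows, assuming the team tracks each hunted behavior along with whether a validated detection exists for it; the behavior names and field names are illustrative.

    # Minimal sketch: of the behaviors we hunt, how many are backed by a
    # validated detection? Behavior names and fields are illustrative.
    hunted_behaviors = [
        {"behavior": "Kerberoasting", "validated_detection": True},
        {"behavior": "LSASS memory access", "validated_detection": True},
        {"behavior": "WMI lateral movement", "validated_detection": False},
        {"behavior": "OAuth consent phishing", "validated_detection": False},
    ]

    covered = sum(b["validated_detection"] for b in hunted_behaviors)
    detection_efficacy = covered / len(hunted_behaviors)
    print(f"Hunted behaviors with validated detections: {detection_efficacy:.0%}")  # 50%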

There’s still a great deal of nuance surrounding the definition of “success”. Teams don’t necessarily want to alert on every single thing they’re hunting, because some activities would produce noisy alerts. For example, threat hunters might be interested in hunting an actor who’s abusing administrator access. That doesn’t mean they should alert on every single administrator login – many of those logins are legitimate.

Mean-Time-to-Detect: Mean-time-to-detect is a key metric not just for threat hunters, but for the SOC as a whole to understand the effectiveness of their detection strategy. It’s a measurement of how long certain threats were present in the network before the team identified them. A related measurement is dwell time – the amount of time a threat actor is present before the threat is remediated. Though slightly different, they’re both metrics that indicate how long adversaries can go undiscovered – and undisturbed – in an organization’s system.
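
Both measurements come down to simple timestamp arithmetic once findings record when the activity began, when it was detected, and when it was remediated. The timestamps below are made up for illustration.

    # Minimal sketch: mean-time-to-detect and mean dwell time from hunt findings.
    # The first_seen / detected / remediated timestamps are illustrative.
    from datetime import datetime
    from statistics import mean

    findings = [
        {"first_seen": datetime(2024, 5, 1), "detected": datetime(2024, 5, 13), "remediated": datetime(2024, 5, 15)},
        {"first_seen": datetime(2024, 6, 2), "detected": datetime(2024, 6, 6), "remediated": datetime(2024, 6, 9)},
    ]

    mttd = mean((f["detected"] - f["first_seen"]).days for f in findings)
    dwell = mean((f["remediated"] - f["first_seen"]).days for f in findings)
    print(f"Mean time to detect: {mttd} days, mean dwell time: {dwell} days")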

A high mean-time-to-detect is hard to trace back to a single root cause, but it can be used more specifically in the context of a threat hunt. If threats are lurking in the network longer than the team would like, that may mean the team isn’t prioritizing threats accurately. By focusing hunts on the threats that are most likely to impact their organization, security teams can make sure they’re dedicating resources to the right places – and detecting the adversaries they need to worry about more quickly.

Incident Reduction and Cost Avoidance: When the average cost of a security incident is $4.45 million, there’s no question that a key driver behind threat hunting is preventing incidents (and the financial fallout that accompanies them). If a threat hunt can be linked to preventing a potential incident, teams can estimate their savings – and therefore the ROI of their threat hunting program.

Another way to look at this is how many potential incidents were taken off the table. For example, if a new and emerging threat is targeting organizations of a certain industry, size, and region, teams fitting that profile might (wisely) decide to go hunting for it. If they find evidence of malicious activity and mitigate the risk, they can consider that a prevented incident – proof of the value and efficacy of their threat hunting program, and maybe even a way to put some concrete numbers on just how high that value is.
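
As a rough back-of-the-envelope calculation – assuming each prevented incident avoids something like the average incident cost cited above – the ROI math looks like this. The program cost and incident count below are placeholders, not benchmarks.

    # Minimal sketch: back-of-the-envelope threat hunting ROI.
    # All figures are placeholder assumptions, not benchmarks.
    avg_incident_cost = 4_450_000    # average cost of a security incident (USD)
    incidents_prevented = 2          # hunts credibly linked to a prevented incident
    annual_program_cost = 600_000    # people, tooling, data retention, etc.

    cost_avoided = incidents_prevented * avg_incident_cost
    roi = (cost_avoided - annual_program_cost) / annual_program_cost
    print(f"Estimated cost avoided: ${cost_avoided:,}, ROI: {roi:.1f}x")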

Coverage: “Coverage” might sound like a vague, hard-to-quantify metric in the realm of threat hunting. The threat landscape is so vast – how can threat hunters fully understand the breadth of their coverage? Essentially, threat hunters need to know that their approach is comprehensive through the lens of the threats that are relevant to them.

Whether it’s MITRE ATT&CK, threat modeling, a Threat Profile, or a combination of many measurement methods, every security team should have some tool that allows them to find out how much of their attack surface is covered. From there, it’s really a numbers game – how many MITRE ATT&CK techniques or threats of interest are being addressed by the team’s threat hunting strategy?
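
With MITRE ATT&CK as the yardstick, coverage becomes a straightforward set comparison between the techniques of interest and the techniques the hunt program actually addresses. The technique IDs below are real ATT&CK identifiers used purely as examples; the “of interest” list is an assumption.

    # Minimal sketch: ATT&CK technique coverage of the hunt program.
    # The "techniques of interest" set is an illustrative assumption.
    techniques_of_interest = {"T1003", "T1059", "T1566", "T1021", "T1486"}
    techniques_hunted = {"T1003", "T1566", "T1021"}

    coverage = len(techniques_of_interest & techniques_hunted) / len(techniques_of_interest)
    not_yet_hunted = techniques_of_interest - techniques_hunted
    print(f"ATT&CK coverage: {coverage:.0%}, not yet hunted: {sorted(not_yet_hunted)}")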

2. Measuring Success of Individual Threat Hunt Outcomes

Zooming into the success of the individual threat hunt, organizations can take a more direct approach. Key metrics may differ based on organizational goals and context, but there are a few that are likely to apply across teams and industries.

Hunt Duration: Organizations should measure the time taken to complete each threat hunt. Because there is no industry-wide framework for threat hunting, security teams might not know whether their hunts are efficient.

It’s possible that certain roadblocks are slowing them down, and they’re not even aware of it. At the very least, they can benchmark their speed internally by keeping track of the time taken to complete each hunt. By comparing the speed of each hunt, organizations can identify instances in which they were more efficient, and what sets those hunts apart from slower ones. From there, they can narrow down where they’re being held back by root causes like technique, tools used, and even team members involved. There are often opportunities for efficiency gains in places that security leaders would never expect to look.
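
A lightweight starting point is recording a duration for every completed hunt and flagging hunts that take much longer than the team’s own median. The durations below are made-up examples.

    # Minimal sketch: internal benchmarking of hunt duration.
    # Durations (analyst-hours per hunt) are made-up examples.
    from statistics import median

    hunt_durations_hours = {"H-101": 6, "H-102": 14, "H-103": 9, "H-104": 31}

    baseline = median(hunt_durations_hours.values())
    slow_hunts = {h: d for h, d in hunt_durations_hours.items() if d > 2 * baseline}
    print(f"Median hunt duration: {baseline} hours, unusually slow hunts: {slow_hunts}")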

Hit Rate: Perhaps the most obvious metric that hunters should keep track of is the hit rate – the proportion of hunts that identify threats. Less obvious, however, is how they’re supposed to interpret that number.

As previously discussed, a high hit rate is not necessarily worse (or better) than a low one. A 100% hit rate would suggest that the organization’s preventive controls are ineffective – and 0% would mean that threat hunters aren’t finding anything at all, which is extremely unlikely to be accurate (unless the organization is completely confident in its detection strategies). So where is the sweet spot?

Like many things in security, it depends. Security teams need to identify a range of hit rates that indicates they’re preventing what they can, and quickly and effectively identifying what they can’t.
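
The hit rate itself is the easy part to compute – interpretation is where the nuance lives. A minimal sketch, assuming each completed hunt records whether it produced a confirmed finding:

    # Minimal sketch: hit rate across completed hunts.
    # Hunt outcomes are illustrative assumptions.
    hunt_outcomes = [
        {"hunt_id": "H-101", "confirmed_finding": False},
        {"hunt_id": "H-102", "confirmed_finding": True},
        {"hunt_id": "H-103", "confirmed_finding": False},
        {"hunt_id": "H-104", "confirmed_finding": False},
    ]

    hit_rate = sum(h["confirmed_finding"] for h in hunt_outcomes) / len(hunt_outcomes)
    print(f"Hit rate: {hit_rate:.0%}")  # 25%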

Measuring Threat Hunting Success

Because threat hunting and its effects can be so ambiguous, it’s important to have a nuanced, structured approach to executing and evaluating hunts. The goal is not to strive for perfection in every hunt but to ensure a robust, adaptable, and effective threat hunting program.

With built-in threat prioritization and research capabilities, plus one-click hunts for scalable threat hunting across disconnected tools, SnapAttack automates and centralizes many of the tasks that slow threat hunters down. The platform also serves as a threat hunting hub where hunters can analyze tangible results with ease and more quickly take the appropriate next steps.

Whether you’re building a threat hunting program from the ground up, trying to make your current approach more effective, or just trying to determine where you stand, SnapAttack’s platform and threat hunting maturity assessments can get you to the next level. Get in touch today.

About SnapAttack: SnapAttack is an innovator in proactive, threat-informed security solutions. The SnapAttack platform helps organizations answer their most pressing question: “Are we protected against the threats that matter?”

By rolling threat intelligence, adversary emulation, detection engineering, threat hunting, and purple teaming into a single, easy-to-use product with a no-code interface, SnapAttack enables companies to get more from their tools and more from their teams so they can finally stay ahead of the threat.