Goldilocks CTI: Building a Program That’s Just Right
It is time to talk about how much time and money we’re wasting in infosec trying to build effective Cyber Threat Intelligence (CTI) programs. Woah, that seems a bit harsh. To be clear, I absolutely believe that CTI plays a critical role in infosec. But I have personally experienced a lot of misaligned efforts and poorly planned CTI projects that have wasted time and money while driving away great analysts.
A poorly implemented CTI program can greatly increase the total cost of the infosec program and reduce the organization’s effectiveness at addressing threats. I have seen analysts write endless products about vulnerabilities in software their enterprises don’t operate, about threat activity with a low likelihood of ever targeting their networks, and about any random topic just to produce for the sake of producing. I also previously discussed a client with an 85% false-positive rate for indicator-based alerts who could only analyze 15% of the alerts they generated, because they never defined a Collection Management Framework (CMF) and simply turned on every threat feed their TIP vendor provided. These are costly mistakes that are repeated far too often.
You can read more about my threat feed experiences here.
I have spent the last ten-plus years building and maturing CTI programs for government clients. I have been on the production side of intelligence, collaborating with stakeholders and writing and publishing threat reports. I have also served on internal CTI programs supporting SOCs as a CTI Lead, a Threat Hunter, and an Incident Response (IR) Lead. In that time, I have had to walk back some crazy ideas (usually not mine… usually), implement new capabilities from scratch, and measure the effectiveness of CTI programs’ activities. In the article below, I provide my thoughts and recommendations on the appropriate team size, capabilities, and outputs for Defender CTI Programs (defined further below). I know I still have a lot to learn about what works and what doesn’t, but I hope these ramblings help an organization or two dodge the pitfalls I’ve encountered along my path.
Side Note: These ideas have rattled around in my head for a while, and a recent exchange on the cursed bird app finally nudged me into writing them down. Thanks, CyberSecStu, for the kick in the rear and the permission to share the thread.
Vendor or Defender?
We’ve gone on for too long without addressing the elephant in the room: there is a huge difference between vendor CTI programs and defender CTI programs. We all have our roles to play to keep our organizations safe. By understanding our roles, we can better serve the business operations we are committed to protecting.
Before I dive into the discussion too far, let’s get some loose definitions settled. These are my definitions for the purpose of this article… they likely need some debate and tweaking.
Vendor CTI Programs — organizations that produce intelligence based on their commercial offerings or legal mandates (e.g. Mil/Gov agencies). This category includes CISA, FBI, NSA, ISACs, ISAOs, and any commercial companies that sell threat intelligence. It also includes security vendors that publish threat intelligence publicly based on their unique visibility into customer space (e.g. Microsoft, Talos, etc.).
Defender CTI Programs — any CTI program that exists primarily for the defense of its own enterprise. These are often referred to as “internal CTI teams” too. They may share publicly through their blogs or through sharing groups, but that’s a function of their program, not a primary service offering.
CTI Capabilities and Services
There are many areas of analysis within CTI that provide unique insights into threat actor intentions and capabilities. These range from victimology and geopolitical analysis, which speak to a threat actor’s intentions, to technical processes such as network infrastructure analysis, malware analysis, and TTP mapping, which speak to their capabilities.
We see some of these capabilities when reading vendor reporting and it is easy to think that vendor CTI programs have the coolest visibility. For example, I remember a threat briefing where the presenter showed screenshots of the files on an adversary C2 server. This means the security researcher remotely accessed the adversary’s system — or a compromised system the adversary controlled — to view their files and collect intelligence. We can debate the legality of a private-sector entity conducting remote collection later, but I was sitting in a government organization that didn’t have the authority to conduct that type of collection, so I was definitely jealous of the security researcher's capabilities. Does that mean my own CTI program should seek that authority? Did we need that capability to run an effective CTI program? I don’t think so.
PRO TIP: Don’t hire your CTI leader solely based on the fact they worked at the NSA or FBI… their experience is likely the complete opposite of what your enterprise defense objectives require.
Whether it’s truly defined or not, there is an infosec community and a smaller CTI community within it that does share threat intelligence across organizations. It is far more cost-efficient for me to leverage relationships with a partner that has the capabilities that I want than it is for me to reproduce those same capabilities within my organization on the off chance that it will add benefit to my program. So before building a capability internally, whether you are a Vendor or a Defender, consider if you can get access to that capability cheaper by leveraging existing relationships or paying a vendor for that service.
PRO TIP: Before building an internal capability, consider leveraging existing relationships or paying a vendor for the service. It might be cheaper in the long run while allowing you to prioritize your resources elsewhere.
Before I deep dive into specific capabilities, let’s run a quick list of some of the services/products/analyses that a CTI program might address, depending on the organization’s structure and line of business. I am sure I’m missing quite a few…
Adversary infrastructure analysis
Attribution analysis
Dark Web tracking
Indicator analysis, including enrichment, pivoting, and correlating to historical reporting
Intelligence production (e.g. writing intelligence reports)
Intelligence sharing (external to the organization)
Malware analysis & reverse engineering malware
Threat hunting (finding badness within internal datasets)
Threat research (finding/correlating badness with external datasets)
Tracking threat actors’ intentions and capabilities
Vulnerability research
I won’t get into all of these in the below discussion, but it is important to understand what each of these processes looks like, even if your organization doesn’t execute them internally. For example, as a Defender CTI analyst, I may never deep-dive into whois and pDNS information to track threat actor infrastructure, identify previously discovered C2 servers, etc., but I must understand how my Vendor CTI analysts conduct this analysis and how their analysis is incorporated into the threat reporting that I ingest within my team.
Side note: Joe Slowik wrote two amazing resources on infrastructure analysis and pivoting. The first is linked here and the second is a PDF link here. I highly recommend reading both of them!
Threat Actor Tracking & Attribution
Okay, let’s address attribution first because why not piss off half of the readers (both of you! Thanks for sticking with me!). I do NOT believe that Defender CTI Programs need to track attribution beyond what is provided through vendor reporting. If activity is observed in your network and it corresponds to the IOCs, TTPs, and targeting mentioned in an attributed vendor report, it’s probably safe to assess that it is the same actor. When reporting that to your stakeholders, you can probably leave it at the country level for attribution.
Let’s be honest, does it do any defenders any good to know if the Canadian military or the Canadian intelligence services are targeting your organization? Does that change how your SOC responds to events? Can you put a unit ID in a signature to match network traffic? Can your organization realistically impose costs on a nation-state’s military or intelligence service? If the cost-effective answer is “no” to those questions, true attribution to the human operator or the organizational level probably doesn’t matter for your organization. You’re not going to offramp these actors from their nefarious ways, so knowing their names and having pictures of their dogs probably isn’t of real value to your organization. Don’t spend the time and money to assess attribution beyond correlating it to vendor reporting.
PRO TIP: When briefing non-technical audiences, stick with the name of the country while discussing the threat actor. You can drop a single line that the actor is tracked as Group 123 and associated with Canadian Intelligence, but constantly referring to the group name is just a distraction for your message. The Canadians are trying to steal your intellectual property. FOCUS ON THAT.
But why isn’t it worth it for defenders to attempt attribution themselves? To effectively attribute specific activities in your environment to known threat actors, your organization will need to spend a significant amount of time analyzing historical threat actor activity, studying the geopolitical and military histories of multiple nations (at least the big four adversaries — Russia, Iran, China, and North Korea), and cataloging the specific TTPs each actor uses in their malware and how they manage their network infrastructure. You will need to define, document, and train the processes to conduct all of that analysis. You will need a knowledge management system, like a Threat Intelligence Platform (TIP), and access to multiple paid services (whois, pDNS, VirusTotal, etc.). The amount of time and money it takes to build the knowledge base necessary to definitively attribute activity to actors is significant. Save your time, save your money, and leverage your vendor CTI programs for assistance with attribution analysis.
That does not mean Defender programs shouldn’t track threat actors over time. It is important to understand which actors historically target your organization, why they likely see you as a target, and what capabilities they have deployed against your network. Defender analysts can focus their efforts on processing incident response reporting from their own SOC and correlating the observed IOCs and TTPs against vendor reporting in the TIP. Much of this correlation can be automated with the right technical solution, which allows analysts to focus on building out their understanding of the activities. By leaning on vendor reporting to provide the attribution, defenders can focus on identifying if they have gaps in their collection (network visibility) and detection (sensor placement, accurate signatures, etc.) to identify threat activities within the network and work to inform internal stakeholders about the organization’s threat landscape.
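To make the correlation point concrete, here is a minimal sketch of the kind of automation involved, assuming your TIP can export indicators with their attribution context and your SOC hands you a list of observed IOCs. The feed structure, field names, and values below are hypothetical placeholders, not any specific vendor’s schema.

```python
# Minimal sketch: match SOC-observed IOCs against vendor-attributed indicators.
# The vendor export format and field names below are hypothetical placeholders.
from collections import defaultdict

# Hypothetical TIP export: indicator value -> attribution context from vendor reporting
vendor_indicators = {
    "198.51.100.23": {"actor": "Group 123", "country": "Canada", "report": "VENDOR-RPT-0456"},
    "login-portal.example.net": {"actor": "Group 123", "country": "Canada", "report": "VENDOR-RPT-0456"},
    "9e107d9d372bb6826bd81d3542a419d6": {"actor": "Group 456", "country": "Canada", "report": "VENDOR-RPT-0789"},
}

# IOCs observed in your own incident response reporting
incident_iocs = ["198.51.100.23", "203.0.113.7", "login-portal.example.net"]

def correlate(iocs, vendor):
    """Group the observed IOCs that overlap with vendor reporting by attributed actor."""
    matches = defaultdict(list)
    for ioc in iocs:
        context = vendor.get(ioc)
        if context:
            matches[(context["actor"], context["country"])].append((ioc, context["report"]))
    return matches

for (actor, country), hits in correlate(incident_iocs, vendor_indicators).items():
    print(f"{len(hits)} observed IOC(s) overlap with {actor} ({country}):")
    for ioc, report in hits:
        print(f"  {ioc}  <- {report}")
```

A real TIP or SOAR integration does the same join at scale and keeps the report linkage for you. The point is that the defender’s job is the overlap check and the resulting gap analysis, not producing the attribution itself.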
Focus on attribution-correlation rather than being an original source of attribution.
Malware Analysis Team?
By now… this may not surprise you: I don’t think you need a malware analysis team and you probably don’t need a dedicated malware analyst in your Incident Response (IR) team, either. Again, speaking to defenders here.
Much like conducting attribution, in-depth malware analysis can be a serious resource drain for defenders. A better approach is to ensure that the IR team is capable of conducting basic malware triage (identify file name/size/path/hash, run strings command, etc.) and that they have access to a malware sandbox. If you’re not tracking malware long-term to support attribution analysis, why are you spending time and money reversing malware in the first place? Detection engineering? You can get great detection rules for free (Sigma, Snort, Yara, etc.) and you’re probably paying premium rates for premium signatures to your current vendors. Plus, your AV, EDR, and sandbox vendors will all gladly take a sample of malware if you really want it fully analyzed. If they didn’t alert on the malware, a stern “Why didn’t you detect this?” in the email subject should get you new signatures for their appliance in a day or two at most.
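For illustration, here is a minimal triage sketch covering the basic steps named above (file metadata, hashes, and a quick strings pass) before the sample goes to a sandbox or back to your AV/EDR vendor. The sample filename and string-length threshold are made-up placeholders.

```python
# Minimal malware triage sketch: capture file metadata, hash the sample, and
# pull printable strings. Illustrative only -- not a substitute for a sandbox.
import hashlib
import re
from pathlib import Path

def triage(sample_path: str, min_str_len: int = 6):
    path = Path(sample_path)
    data = path.read_bytes()
    return {
        "name": path.name,
        "path": str(path.resolve()),
        "size_bytes": len(data),
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        # Equivalent of a quick `strings` pass: runs of printable ASCII
        "strings": [s.decode() for s in re.findall(
            rb"[\x20-\x7e]{%d,}" % min_str_len, data)],
    }

if __name__ == "__main__":
    result = triage("suspect_invoice.exe")  # hypothetical sample name
    for key in ("name", "path", "size_bytes", "md5", "sha256"):
        print(f"{key}: {result[key]}")
    print("interesting strings:", result["strings"][:20])
```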
Focus your team’s time on understanding what the malware was trying to do and confirming whether or not it was successful. Did the baddies steal your intellectual property? Did they get legitimate credentials? That’s far more important than which obfuscation techniques the malware used to bypass your AV. You focus on your network defense and your AV vendor can focus on fixing their detections (with a little encouragement from your purchasing authority).
To the Dark Web! Or Not?
When discussing most capabilities, I’ll tell you it depends on your intelligence requirements — what are the business needs that you’re trying to fulfill, and will the capability cover that need? For 99% of defender CTI programs, I’m going to say this bluntly:
YOU DO NOT NEED AN INTERNAL DARK WEB CAPABILITY.
Okay, you may need to pay for dark web monitoring services from a vendor, depending on the type of organization you are defending. Are you a considerably large company? Do you operate in or near geopolitically sensitive markets like defense, energy, or finance? Are you part of financial services infrastructure, like manufacturing or issuing credit cards? These are a few of the use cases where it might make sense to pay for dark web monitoring.
Vendors can monitor forums for any discussions around your brand, technology, or customers and alert you to possible targeting or compromises. Fair warning: I have received quite a few “dark web tippers” from companies trying to sell their services to my clients that were about as specific as “a threat actor is talking about targeting an organization similar to yours.” I have also seen very specific tippers to clients that included the TTPs threat actors used to bypass our client’s fraud detection systems. Experiences may vary; shop around before committing to a long-term relationship.
Why wouldn’t you build an in-house dark web capability? There are many, many, many forums to track and monitor that discuss hacking, selling access to compromised systems/accounts/credit cards, and offering hacking services for hire. To make monitoring harder, these forums use a lot of slang across multiple languages. It’s not as easy as writing a web bot that alerts on your company name. You will need dedicated analysts, linguists, and online personas. Effective dark web programs essentially build a fully capable human intelligence program. As cool as that sounds to some of you weirdos, it is a very dark business (no pun at all here) and quite costly for the average defender organization to gamble on potentially, someday, maybe catching a discussion about targeting your network. Definitely outsource this capability to a vendor if you really need dark web monitoring.
Indicator Platform or Analyst Platform?
I can say a lot about threat intelligence platforms (TIPs). No seriously, I’ve already written about the topic.
The first thing to consider before deploying a TIP:
Do you want your team to track threat actors' intentions and capabilities over the long term? Or…
Is it good enough that your team simply tracks IOCs and generates alerts when traffic matches known IOCs?
The majority of the vendor TIPs that I have used were built primarily for IOC wrangling and they don’t really help with analysis, correlating activity and reporting over time, or managing threat intelligence. With only a few exceptions, most TIPs are expensive threat feed (read: JUST IOCs) integrators and they do that poorly too.
PRO TIP: You can save your money on a TIP and use the lookup tables in your SIEM to integrate IOCs until your CTI program matures enough to require an analyst platform. Build the use case, then buy the capability. Not the other way around.
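As a rough illustration of that approach, the sketch below flattens a vendor feed export into a CSV lookup table. The feed file, its structure, and the field names are assumptions you would adapt to whatever your feed actually provides.

```python
# Minimal sketch: flatten a vendor feed export into a CSV lookup table that a
# SIEM can match against. The feed structure and field names are assumptions.
import csv
import json

FEED_FILE = "vendor_feed.json"     # hypothetical local export of a threat feed
LOOKUP_FILE = "threat_iocs.csv"    # lookup table your SIEM will ingest

with open(FEED_FILE) as fh:
    feed = json.load(fh)           # assumed: a list of indicator objects

with open(LOOKUP_FILE, "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["indicator", "type", "actor", "source", "last_seen"])
    writer.writeheader()
    for item in feed:
        writer.writerow({
            "indicator": item["value"],
            "type": item.get("type", "unknown"),
            "actor": item.get("actor", ""),
            "source": item.get("source", ""),
            "last_seen": item.get("last_seen", ""),
        })
```

Most SIEMs can reference a table like this directly, for example via Splunk’s lookup/inputlookup commands or an equivalent reference-list feature, which covers basic IOC matching without a dedicated platform.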
If you want to track threat actors and deliver a real analytical capability for your team, here are the requirements I look for in an analyst platform:
Separate object types for reports, actors, malware, IOCs, CVEs, TTPs
True deduplication of each object type (many vendors do some magic handwaving about deduping, but they have one record per source, so the same IOC is actually in your system multiple times despite what they say and how they display it… major impacts for integrating into a SIEM)
Automatically correlate threat actor reporting to my Actors, Malware, CVEs, signatures, and IOCs based on the context in the threat reporting
Let me override anything and everything the TIP automatically did, including correlations, scoring, etc. AND NOT JUST DURING UPLOAD. AT ANY TIME I SHOULD BE ABLE TO MODIFY ANY OBJECT AND ANY ATTRIBUTE.
Standard enrichment only requires plugging in an API key from the appropriate vendor (a minimal sketch of how simple that lookup can be follows after this list). SERIOUSLY, I SHOULDN’T NEED SOAR TO ENRICH IOCS WITH VIRUSTOTAL AND WHOIS
Truly manage signatures (writing, testing, correlating to threat reporting/actors/CVEs/etc.), including sensor integration
Visualization support for mapping out campaigns, object relations, etc.
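Here is the sketch referenced above: a minimal example of the kind of enrichment that should only need an API key, querying VirusTotal’s v3 files endpoint for a hash. The response fields shown are based on the public v3 documentation and may vary; treat this as an illustration, not a drop-in integration.

```python
# Minimal enrichment sketch: look up a file hash via the VirusTotal v3 API.
# Only an API key is required -- no SOAR playbook needed for a basic lookup.
import os
import requests

VT_API_KEY = os.environ["VT_API_KEY"]  # supply your own key via the environment

def enrich_hash(file_hash: str) -> dict:
    """Return detection stats and known filenames for a hash, per VT's v3 schema."""
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{file_hash}",
        headers={"x-apikey": VT_API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    return {
        "hash": file_hash,
        "detections": attrs.get("last_analysis_stats", {}),
        "known_names": attrs.get("names", [])[:5],
    }

if __name__ == "__main__":
    # MD5 of the EICAR test file -- a safe, well-known sample to query
    print(enrich_hash("44d88612fea8a8f36de82e1278abb02f"))
```

Whois and pDNS lookups are a comparable handful of lines against the respective providers’ APIs, which is exactly why native enrichment should be table stakes for a TIP.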
PRO TIP: If your TIP vendor’s answer to every requirement is “use tags”, RUN. Tags can be useful when used judiciously. They are an absolute nightmare when they’re used to tag reports with CVEs, malware names, and TTPs because your vendor doesn’t have objects for those items. TAGS ARE NOT THE SOLUTION TO 99% OF YOUR TECHNICAL REQUIREMENTS.
If it seems like this is a hot topic for me, then you’re right. I’ve been a part of multiple TIP projects for clients, which included hands-on testing of at least 10 platforms, and I’ve used five different TIPs in production. I have been duped, frustrated, and outright angry at how poorly some of them performed. I’ve also been pleasantly surprised by one vendor and one open-source project. These two are the only projects I’ve seen where it seems like actual analysts use their platforms for real analysis. I’m happy to discuss TIPs in a DM if you are seriously considering which TIP to leverage or if you just want to rant about your own frustrations with the “solutions” you’ve deployed.
Defender CTI Programs For The Win
By now, you can probably tell that I think that Defenders should offload a lot of the long-term tracking and analysis to vendors rather than spending the time and money to try and replicate their capabilities internally. We spend a lot of money on AV, EDR, IDS/IPS, and SIEM solutions. Let’s hold those vendors accountable for the services they claim to provide. Those companies have amazing visibility across the global threat landscape and most of them have great analytical teams producing threat intelligence reporting, signatures, and IOCs. Let them spend the time and money tracking threat actors over the long term.
Focus your defender program on understanding your vendors’ capabilities, your organization’s unique place in the world, and the types of threat actors that are likely to target your organization. Only an internal CTI team has unique visibility into the attacks your network sees, the vulnerabilities within your enterprise, and your ability to respond to threats. Make that the focus of your internal CTI program and ignore the distractions. If you manage your CTI resources properly, they can drive down the risks to your enterprise while potentially driving down the costs to defend it.
So what does a Defender CTI Program look like? What do they focus on? Where is their value?
I argue that most organizations only need 4–10 dedicated CTI analysts, depending on the enterprise’s size and complexity. Fewer than four and you run the risk of burning out your analysts and having too narrow a range of views supporting or challenging your analytical assessments. Diversity of thought is encouraged in intelligence analysis. In my experience, CTI analysts should be aligned with the SOC or the business unit that manages the SOC to ensure they have the necessary access and relationships to effectively integrate threat intelligence into defense. Aligning CTI analysts under a risk, governance, or information assurance-focused team increases the chance that they will focus on likelihood and impact assessments rather than truly understanding threat actor intentions and capabilities. I tend to find these misaligned teams end up focused on risk calculations and provide very little input to help the SOC detect, contain, and eradicate threat actors.
Think of your Defender CTI analysts as your CTI integrators. They take in a vast array of threat data from vendors, open-source sites, and your internal systems to assess the threat landscape of your enterprise. This is why it is critical that they understand their sources’ collection and analysis methodologies (e.g. dark web tracking, malware analysis, infrastructure analysis, and attribution). Your vendor reports and system outputs are the threat team’s collection sources, and collection is the basis of all of the team’s analytical capabilities. You can’t analyze what you don’t collect and what you don’t understand.
Defender CTI programs take the data from internal systems and intelligence from vendors to assess:
Who are the threat actors likely to target our organization and why?
What are their capabilities and how can we detect them?
Which vulnerabilities in our environment have been targeted by threat actors?
Which systems and users are critical for our business operations, how are we defending them, and have they been targeted in the past?
We’re starting to shape the basic intelligence requirements for our CTI program, which is, unfortunately, a completely separate article rattling around in my head… I will write it eventually.
Hopefully, you can see why recreating vendor capabilities is a giant waste of money for Defender CTI programs and why their time is best spent integrating threat intelligence within the organization. Internal programs are critical to understanding threat actors and driving down risk with intelligence-driven operations. CTI managers need to appreciate their team’s role in the organization and focus on defending the business rather than chasing delusions of grandeur and recreating unnecessary analytical programs… okay, that’s a separate rant that deserves a cold drink or two. Cheers.
Hopefully, this article reduces the ambiguity a bit and it helps at least one organization build the CTI program that is right for them before the Bears come back.