An Analyst’s Need for a Threat Intelligence Platform

My lessons learned and recommendations from multiple threat feed and Threat Intelligence Platform (TIP) assessments. PART II

A lot of organizations are rushing out to get Threat Intelligence Platforms (TIPs) for their analysts, and rightfully so. The commercial market has done well selling us the idea that a “TIP” is the solution to enhancing and improving our analytical capabilities. However, organizations that buy into this mantra without first taking the time to define well-thought-out requirements may find analysts reviewing and researching the same information in separate silos-of-excellence while seeing no real improvement in program efficiency. In fact, a poorly executed intelligence program usually leads to frustrated analysts, a drastic increase in false-positive detections, and requests for additional budget to fill capability gaps. By contrast, when a requirements-driven solution is put into place that brings together knowledge from the entirety of cybersecurity operations (security operations, incident response, vulnerability management, etc.), an analyst can process information more effectively and deliver more valuable intelligence products to the entire organization.

For a better understanding of threat intelligence’s possible integrations with business units, check out the white paper “Applying the Threat Intelligence Maturity Model to Your Organization.” (*Disclaimer in Resources section*)

Keep in mind that defining requirements should not be confused with “which tool do we need” or “how can we integrate the features of ____ solution into our processes.” While these questions are most certainly important, focusing first on the solution rather than the problem will almost certainly lead to frustration and wasted money in the long run. Organizations fail when they build requirements or processes around solutions instead of solutions around requirements or processes.

PRO TIP: Do not build your requirements while sitting through vendor demos! DO ask your potential vendors which use-cases and requirements their product fulfills, but only after you have already defined your own use-cases and requirements. If a vendor can’t answer these simple questions, you should strongly reconsider where you are investing your resources. You should be looking for a solution to problems you have already identified in your operations. You are NOT buying a solution to problems that the vendor highlights in their sales pitch.

Come on, it can’t be that bad, right? A few years ago, I used Firefox’s tab manager capability to auto-open 20–25 URLs, and then I would run IOCs through at least half of those tabs for correlation, enrichment, and pivoting. My teammates and I would copy/paste each IOC into our preferred websites and then copy/paste those results into a Python-based tool to write products. It was tedious and time-consuming, and an individual analyst averaged a meager 84 IOCs a month, in a world where thousands are shared every day. In fairness to the team, we had strict research, writing, and publication standards that required extra due diligence before sharing; these restrictions most certainly contributed to the “low output.” It is safe to say that many of the limitations we encountered would not appear in other organizations. The thing to consider here is that the proper technology could have helped us automate the mundane while enabling us to focus on our analysis and strict sharing requirements.

While in the same role as above, I had the unique opportunity to collaborate with roughly 300 companies and federal agencies. Through many meetings and a few happy hours, I learned that many of our struggles were the same across teams. It turns out that we were all copy/pasting the same reports into browsers, tracking notes in Notepad or Notepad++ (for the advanced analysts!), and then dumping the notes onto a share drive somewhere to be forgotten within a few short days. The real ninja shops were using OneNote or MS Access to develop a TIP in-house, sometimes even using a combination of both applications! Some organizations I worked with also leveraged Python scripts to extract observables from reporting and pump-and-dump them into their threat repositories. I am not a fan of this method, as most of these Python scripts fail to scrape the context around the IOCs, rendering the observables far less valuable than they should be.
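To illustrate the difference, here is a minimal, hypothetical sketch of the kind of extraction script I am describing. It pulls IPv4 addresses and MD5 hashes out of a report with regular expressions, but it also keeps the sentence each observable appeared in, so the context travels with the indicator instead of being stripped away. The patterns and sample report are assumptions for illustration, not anyone’s production tooling.

```python
import re

# Simple illustrative patterns; real tooling needs broader coverage
# (IPv6, URLs, defanged indicators like hxxp:// and [.], etc.).
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
}

def extract_with_context(report_text: str) -> list[dict]:
    """Return each observable along with the sentence it appeared in."""
    results = []
    # Naive sentence split; good enough to demonstrate the idea.
    for sentence in re.split(r"(?<=[.!?])\s+", report_text):
        for ioc_type, pattern in IOC_PATTERNS.items():
            for match in pattern.findall(sentence):
                results.append({
                    "type": ioc_type,
                    "value": match,
                    "context": sentence.strip(),  # keep the surrounding prose
                })
    return results

report = ("The actor staged payloads on 203.0.113.7. "
          "The dropper had the hash d41d8cd98f00b204e9800998ecf8427e.")
for ioc in extract_with_context(report):
    print(ioc["type"], ioc["value"], "->", ioc["context"])
```

A script that stops at a bare `findall` over the whole document gives you a list of naked strings; keeping the sentence alongside each hit is a small change that preserves most of the analytical value. This point leads us to the first part of the discussion: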

Do I need an analysis platform or an indicator aggregator?

From this evolution, we know that analysts need a system that helps them process threat reporting and IOCs, correlate and enrich the data across multiple sources, and maintain the history of this threat data in a single system. So why is it that most of the TIP demos we see focus primarily on threat feed integration and information sharing? Those are not the use-cases we provided. These “solutions,” in my opinion, are not aligned with the daily requirements that analysts face. Here’s how I see it: there are two primary use-cases that organizations consider when purchasing a platform:

  1. An analyst platform that helps analysts manage their shared knowledge about threat actors, malware families, CVEs, IOCs, TTPs, etc.
  2. An indicator hopper that is primarily used to ingest large amounts of feeds, filter them, and push them into the organization’s security stack (SIEM, IDS, etc.).

For my thoughts on threat feeds, check out my article titled “Considerations for Leveraging Cyber Threat Feeds Effectively” and the work we’re doing over at the Center for Cyber Intelligence.

[embed]https://medium.com/@andy.c.piazza/considerations-for-leveraging-cyber-threat-feeds-effectively-1d1cfa9fb140[/embed]

These two approaches to Cyber Threat Intelligence (CTI) management are quite different, and they involve conflicting requirements. If you are at this crossroads currently, I recommend that you focus your effort and resources on implementing an effective analyst solution rather than a fancy threat feed aggregator. Consider the fact that most of your security control vendors (e.g. IDS, IPS, mail gateway, etc.) are already consuming these IOCs and often offer the ability to alert or block based on their own IOC integration efforts. Right here, your organization could decide to accept the risk: the vendor that built your security perimeter can also be trusted with signature development and IOC processing. This decision is obviously a hard sell, but balance that against the fact that it frees up your resources to focus on targeted analysis and a level of intelligence maturity that your analysts cannot provide if they are drowning in IOC streams. Let’s not forget that most vendors work with their clients to reduce false-positives and will respond to your feedback if you are getting too many from their solutions.

Upfront, I will acknowledge that this opinion of mine is biased by my needs as a threat analyst. To date, I have never needed a tool to help me create more noise and alerts in an environment, but I have needed tools that correlate threat data to previous reporting, forensic artifacts, and detection signatures.

How do my problems lead to real solutions?

Let’s talk about formulating requirements. Organizations should consider the outputs most commonly requested from their analysts in order to identify the key questions a new platform must help answer. Common questions threat teams answer include (a sketch of how a platform might support the first of these follows the list):

  • Have we seen these IOCs and TTPs before?
  • Is there previous reporting on these IOCs?
  • What do we know about Actor X?
  • What do we know about this malware Y?
  • Are we defending against threat X?
  • Are we defended against vulnerability Z?
  • How many signatures do we have deployed to detect and mitigate X, Y, and Z?
  • What TTPs does Actor X use?
  • Have we seen TTP X used in our environment?
  • What actors should we care about and are they targeting us?
  • What are the threats targeting organizations similar to mine?
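Answering “have we seen this before?” is fundamentally a lookup problem against your accumulated knowledge base. As a hedged illustration of why a structured store matters, here is a minimal sketch using SQLite; the table and column names are hypothetical, not drawn from any particular product.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE reports (id INTEGER PRIMARY KEY, title TEXT, published TEXT);
CREATE TABLE iocs (id INTEGER PRIMARY KEY, value TEXT, ioc_type TEXT);
-- Many-to-many: the same IOC can appear in many reports.
CREATE TABLE report_iocs (report_id INTEGER, ioc_id INTEGER);

INSERT INTO reports VALUES (1, 'Q1 Phishing Wave', '2019-03-25');
INSERT INTO iocs VALUES (1, 'badguy.example', 'domain');
INSERT INTO report_iocs VALUES (1, 1);
""")

def have_we_seen(conn, value: str) -> list[tuple]:
    """Return every report that previously mentioned this IOC."""
    return conn.execute("""
        SELECT r.title, r.published
        FROM iocs i
        JOIN report_iocs ri ON ri.ioc_id = i.id
        JOIN reports r ON r.id = ri.report_id
        WHERE i.value = ?
    """, (value,)).fetchall()

print(have_we_seen(conn, "badguy.example"))  # [('Q1 Phishing Wave', '2019-03-25')]
```

Every question in the list above reduces to a query like this against linked records (actors, malware, TTPs, signatures). If a candidate platform cannot answer them quickly, it is an indicator hopper, not an analyst platform.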

In addition to the intelligence questions you are trying to answer, organizations should also identify repetitive analytical tasks that can be automated, saving the business time and money. Ensure you include requirements for your TIP that address these potential automation points. I would stress that you should focus on automating the mundane tasks (e.g. parsing IOCs, linking data, etc.) but ensure that the system does not attempt to automate the analytical functions that your analysts perform. There are a few vendors on the market that have developed systems that make final judgment calls on which IOCs to ingest and which ones are not valuable; this is a huge pitfall, in my opinion. I am okay with systems making suggestions (e.g. flagging that an IOC belongs to a major ISP, as in the sketch below), but the analysts must have the final say over their knowledge base.
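A hedged sketch of what “suggest, don’t decide” might look like in practice: the system annotates a candidate IOC with a warning when it matches well-known infrastructure, but ingestion remains an analyst action. The ranges and labels here are illustrative assumptions, not a real allowlist.

```python
import ipaddress

# Illustrative examples of well-known infrastructure; a real list
# would be far larger and maintained over time.
KNOWN_INFRASTRUCTURE = {
    "8.8.8.8/32": "Google Public DNS",
    "1.1.1.1/32": "Cloudflare DNS",
}

def suggest(ioc_value: str) -> dict:
    """Annotate an IOC with a suggestion; never drop it silently."""
    note = None
    try:
        addr = ipaddress.ip_address(ioc_value)
        for cidr, label in KNOWN_INFRASTRUCTURE.items():
            if addr in ipaddress.ip_network(cidr):
                note = f"Matches well-known infrastructure: {label}"
    except ValueError:
        pass  # not an IP; other checks would go here
    return {"value": ioc_value, "suggestion": note,
            "ingest": "PENDING_ANALYST_REVIEW"}

print(suggest("8.8.8.8"))       # flagged, but still awaiting the analyst's call
print(suggest("203.0.113.7"))   # no suggestion; analyst decides either way
```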

Organizations must also consider the priority level of each requirement. Not all tasks are created equal. For example, many organizations consider system integration with their security stack their number one priority, but I would argue that integration is secondary to getting a better solution for your analysts. As I often say, having an awesome stereo and sunroof on a car isn’t very cool if the engine is garbage and doesn’t get you down the road. So what do analysts do that sucks, and what can be offloaded to automation?

  • Extracting data from threat reports
  • Correlating context around technical indicators
  • Identifying keywords (e.g. actor and malware names) and how they correlate to other keywords of interest (e.g. KILLERTOASTER = REDRABBIT)
  • Correlating data from reports to existing data points, including past reporting, deployed countermeasures, etc.
  • Managing source documents as artifacts (e.g. where did I save that report that provided these IOCs?)
  • Enriching data points with multiple sources (e.g. is this URL malicious in VirusTotal, and who owns the domain? See the sketch after this list)
  • Managing artifacts and knowledge objects related to in-house incidents (e.g. analyst notes, PCAP, and compromised accounts/hosts)
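As a hedged illustration of that enrichment bullet, here is a minimal sketch that checks a URL against VirusTotal’s v3 API and pulls domain registration data from the public rdap.org redirector. The flow (look up, attach results, move on) is the part worth automating; error handling, rate limiting, and API-key management are omitted, and you should verify both endpoints against current documentation before relying on them.

```python
import base64
import requests

VT_API_KEY = "YOUR_API_KEY"  # placeholder

def enrich_url(url: str, domain: str) -> dict:
    enrichment = {}

    # VirusTotal v3 identifies URLs by unpadded URL-safe base64.
    url_id = base64.urlsafe_b64encode(url.encode()).decode().strip("=")
    vt = requests.get(
        f"https://www.virustotal.com/api/v3/urls/{url_id}",
        headers={"x-apikey": VT_API_KEY},
        timeout=10,
    )
    if vt.ok:
        stats = vt.json()["data"]["attributes"]["last_analysis_stats"]
        enrichment["vt_malicious_votes"] = stats.get("malicious", 0)

    # RDAP (the structured successor to WHOIS) for registration data.
    rdap = requests.get(f"https://rdap.org/domain/{domain}", timeout=10)
    if rdap.ok:
        enrichment["rdap_entities"] = rdap.json().get("entities", [])

    return enrichment

print(enrich_url("http://badguy.example/badurl", "badguy.example"))
```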

In my opinion, a combination of an analyst’s needs and the questions they are regularly tasked with answering should make up the foundation for any TIP requirements discussion. After all, a TIP is supposed to be a solution to threat intelligence problems and should, therefore, focus on analytical problems first. After these requirements are met, organizations may layer additional requirements onto the platform, such as managing signature drafting, review, and deployments; integration with SIEMs and IDS; and external information sharing capabilities.

Do my analysts need an analytical platform or an information sharing platform?

Another distinction worth discussing is the difference between a system intended for internal analysis and a system chosen as an information sharing platform. I strongly urge organizations to maintain two separate systems for these functions to reduce the risk of inadvertent disclosure. I am a huge believer in the information sharing community and strongly believe in the need for organizations to have an automated way to consume, enrich, and share threat intelligence externally. I highly recommend that organizations participate in information sharing programs, such as their sector-specific Information Sharing and Analysis Center (ISAC) and DHS’ Cyber Information Sharing and Collaboration Program (CISCP).

There are amazing solutions on the market that support these information sharing communities, and organizations can download any one of the free TAXII solutions available to connect with them, then go crazy integrating the very threat feeds that I recommend against ingesting into the analysis solution. In this configuration, shops can pump external feeds into their TAXII solution and filter only the high-quality, high-context feeds into their analyst solution. When analysts identify threat intelligence to share, they can export it from their analyst solution, process it through a human review assessment, and upload it to their information sharing solution. This configuration greatly reduces the risk of inadvertent disclosure while also maximizing the value of the information sharing communities.
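For the inbound half of that configuration, here is a minimal sketch using the OASIS taxii2-client library to poll a TAXII 2.1 collection and apply a crude quality filter before anything reaches the analyst platform. The server URL, credentials, and filter criteria are placeholders; treat the filter logic as an assumption about what “high-context” means for your shop.

```python
from taxii2client.v21 import Collection

# Placeholder collection URL and credentials.
collection = Collection(
    "https://taxii.example.org/api/collections/feed-1/",
    user="analyst", password="changeme",
)

def pull_high_context_indicators() -> list[dict]:
    """Poll the collection; keep only indicators with real context."""
    envelope = collection.get_objects()
    keep = []
    for obj in envelope.get("objects", []):
        if obj.get("type") != "indicator":
            continue
        # Crude filter: require a description and at least one label
        # before forwarding into the analyst platform.
        if obj.get("description") and obj.get("labels"):
            keep.append(obj)
    return keep

for indicator in pull_high_context_indicators():
    print(indicator["id"], indicator.get("description", "")[:80])
```

The outbound path, by contrast, should never be a script at all; as described above, it should run through export and human review before anything touches the sharing platform.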

So what is the risk, really, and why am I so against an all-in-one solution? I believe that the risk of accidentally leaking internal incident data is not worth the convenience of automatically sharing threat intelligence externally. This is not to say that I don’t want threat feeds coming into my system automatically. I want and need that capability, but I neither want nor need my analysis to go outbound without a systematic review and approval process. True story: I once used a custom system that was for internal use only for the first few years we used it. Then the decision was made to update the system to support external information sharing. The challenge we faced was threefold:

  1. The system was full of historical incident data.
  2. Analysts were used to putting this data into their system and it would take several months to break those habits.
  3. We no longer had a system where we could confidentially store and correlate internal data with external threat data.

Because of the decision to support external information sharing from our incident analysis system, many analysts simply stopped using the in-house system until it was replaced years later. Consider the capabilities that were lost; or, if you’re a decision maker, consider the financial loss of paying for a system that your analysts cannot confidently use to support your organization. I cannot stress this strongly enough for internal analyst teams: DON’T DO THIS. The decision to share threat intelligence cannot be a simple YES/NO dropdown in the TIP. That is an undue risk.

How did the evolution of analyst tools lead us here?

But let’s get back to the requirements discussion. Now that we have a basic understanding that our intelligence analysts need a system that helps them actually do analysis, what does that look like? Looking at the UI of most popular TIPs, you would think that analysts want link diagrams, fancy confidence ratings, touchless IOC ingestion, and single-IOC views. While many of these are cool, where did these concepts come from, when most of us evolved from Notepad -> Excel -> Access -> SQL? Most intelligence analysts have experience working with table views where we can rack and stack data, sort it, filter it, etc. So how did we get from that experience to these new fancy “solutions” where the workflow and daily processes are a completely different experience? While analysts do need the ability to perform link analysis and pivoting, this shouldn’t come at the cost of losing our traditional and familiar analytical capabilities.

I think this issue is the result of two things:

1. Misguided Market Research: The market conducted some basic research on traditional intelligence platforms (like Maltego, i2 Analyst’s Notebook, Palantir, etc.) and tried to build platforms with similar capabilities, perhaps without seriously considering the typical workflow of an intelligence analyst. It appears that, even today, there is an intense focus on the “importance” of IOCs and “threat intelligence sharing.”

2. Analyst Complaints: Leadership approaches the team and says, “We’re looking at buying a TIP; what are your requirements?” And what do we do? We list out all the things we wish we could do but don’t currently have as a capability. We are so good at assessing what we don’t have (*cough* complaining *cough*) that we forget to include the things we currently have and need to be successful.

This is where having a solid understanding of your day-to-day processes comes in. Organizations must understand the importance of not building processes around technology, but rather technology around processes. Teams should walk through their information flows, the questions identified as routine RFIs, and the pain points in their processes. I recommend that analysts and leaders develop their list of requirements and group them as “core requirements” and “enhancements.” This will greatly help both the analyst team and the procurement team identify the best solutions to evaluate.

PRO TIP: Consider rating your requirements using the MoSCoW Method (Must Have; Should Have; Could Have; Won’t Have). This will assist your procurement team in understanding what is absolutely critical vs. what is just a “nice to have.” Because let’s be real: we all have hopes and dreams, but there is almost certainly no solution on the market today that will address every single requirement a team comes up with.
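As a toy illustration (the requirements themselves are invented for the example), a MoSCoW-tagged requirements list can be as simple as a structured file, and the gating rule is equally simple: any candidate that misses a “Must Have” is out, regardless of how many “Could Haves” it nails.

```python
# Hypothetical requirements tagged with MoSCoW priorities.
REQUIREMENTS = [
    {"id": "R1", "text": "Import IOCs as grouped collections", "moscow": "Must"},
    {"id": "R2", "text": "Free-text description field per IOC", "moscow": "Must"},
    {"id": "R3", "text": "SIEM integration",                    "moscow": "Should"},
    {"id": "R4", "text": "Built-in word processor",             "moscow": "Wont"},
]

def passes_gate(met_requirement_ids: set[str]) -> bool:
    """A vendor that misses any Must Have is eliminated outright."""
    musts = {r["id"] for r in REQUIREMENTS if r["moscow"] == "Must"}
    return musts <= met_requirement_ids

print(passes_gate({"R1", "R2", "R3"}))  # True
print(passes_gate({"R1", "R3"}))        # False: missed a Must Have
```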

Things you probably don’t want in a TIP…

Let’s also have a conversation about what we don’t want or need in our TIP. I have already stressed the risk of using your internal analyst platform as an information sharing platform; if you’re not convinced, imagine connecting your internal ticketing system to an information sharing platform. Yes, you should participate in information sharing communities, and yes, you should have a system to help you with those processes. No, it should not be the same system that holds your incident analysis. Do you want that ticket about passwords stored in clear-text being pushed out to the world? I’m going to assume that you’re convinced and move on… hopefully you’re convinced… Here are a few other capabilities (or design decisions that limit capabilities) that I have found really frustrating or that we simply do not need in these platforms:

  • The Platform is connected to external communities for outbound information sharing. Okay, I beat this up a bit already. But if your organization is going to enable this, require your vendor to put in a two-step approval process that enforces two-person integrity (e.g. one analyst reviews and marks approved, then a second analyst does the same).
  • The Platform enables analysts to write products directly in the system. Sorry, I just don’t want or need this functionality. I want to be able to export the IOCs into a STIX or CSV file for sharing, but I want to write in a word processor. Word processors are easier for formatting, have better grammar and spelling checkers, and that is their core function. I don’t need my TIP to be a one-stop shop for everything I do. I’ve said this before: we bought a Ferrari; let’s not put a lift kit on it.
  • The Platform does not allow you to import IOCs because they are benign (e.g. Google DNS) or public infrastructure (e.g. GoDaddy IP space). TIPs must support benign IOCs, or your team is losing intelligence about how malware and threat actors operate. Remember, IOCs in the TIP do not need to be auto-pushed into the security stack. Mark them as Anonymization (e.g. TOR), Benign (e.g. Google DNS), or Compromised (e.g. a business partner sending malware).
  • The Platform does not support importing “collections” of IOCs. Most organizations share IOCs as groups within threat reports; your TIP should keep them packaged together as a group during import. Otherwise, once again, you are losing valuable context about the activity. And no, importing the IOCs individually, conducting the analysis, and bringing them back together into a package/report/bulletin does not make sense for an analyst workflow. I want to upload them as a unit, correlate them to other units, but keep them tied up in a nice little bow the way the author intended.
  • The Platform does not support context around IOCs. While I love to use tag data in my systems just as much as the next analyst, tags cannot replace the need for a dedicated description field for your IOCs. For context (small pun intended), an example of what a description field could provide: “On March 25th, 2019, “Help Desk” <badguy(@)badguy(.)net> sent an Invoice-themed phishing email with the malicious attachment “openme.doc” [MD5: …]. When executed, “openme.doc” calls out to badguy[.]net/badurl and downloads ransomware.” Imagine having this datapoint enriched in your SIEM’s notable events. This is actionable intelligence that can save your analysts time and, by extension, save your organization money. Without this information, analysts end up spending time researching the significance of each IOC (or perhaps aren’t doing so at all).
  • The Platform does not allow for human review and modification of the upload prior to committing the data to the system. There is nothing worse than having multiple imports of the same information. Your TIP should do its best to auto-extract the key data elements from a report (e.g. actor name, malware name, CVEs, IOCs, etc.) and then present the extracted information for review and modification by an analyst before committing changes to the system.
  • The Platform does not allow import of IOCs already in the system. I don’t want duplicate entries for the same IOCs; I like deduplication. But I don’t like being unable to see that a given IOC appears in multiple reports because the system rejected it during upload. The TIP should maintain a one-to-many relationship between IOCs and reports with their context. I want to see everything we know about that MD5, not just what was uploaded the first time the system saw it.
  • The Platform does not separate IOC Type from IOC Role. Data standards matter. There are only two types of IPs in a normal data model: IPv4 and IPv6. I should not have to sort through dozens of options for IOC Type. How the IOC is being used (e.g. Sender Address, Exfil Address, etc.) must be a different field (see the sketch after this list). More to come on this in a future article discussing the Minimum Standards for Cyber Threat Information Sharing.
  • The Platform simply isn’t intuitive. If you are constantly asking how to do something in a tool, there is a problem with your supposed “solution.” If the vendor’s responses to those questions include the words “work around” or “just use a tag,” you may have even bigger issues. As discussed above, the user experience should replicate the analytical processes that the team is already used to performing without the tool. With that in mind, it should be simple for analysts to jump into a new TIP and find their way through the system.
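To make the Type-versus-Role distinction concrete, here is a hedged sketch of a data model that keeps the two in separate fields and also carries the status markings (Anonymization/Benign/Compromised) mentioned above. The field and enum names are my own invention for illustration, not any product’s schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class IocType(Enum):          # what the observable *is*
    IPV4 = "ipv4"
    IPV6 = "ipv6"
    DOMAIN = "domain"
    MD5 = "md5"

class IocRole(Enum):          # how the observable was *used*
    SENDER_ADDRESS = "sender-address"
    C2 = "command-and-control"
    EXFIL_ADDRESS = "exfil-address"

class IocStatus(Enum):        # analyst-assigned handling marker
    MALICIOUS = "malicious"
    ANONYMIZATION = "anonymization"   # e.g. TOR exit node
    BENIGN = "benign"                 # e.g. Google DNS
    COMPROMISED = "compromised"       # e.g. a business partner's host

@dataclass
class Ioc:
    value: str
    ioc_type: IocType
    role: Optional[IocRole]     # role can vary from sighting to sighting
    status: IocStatus
    description: str            # the dedicated context field

ioc = Ioc("8.8.8.8", IocType.IPV4, None, IocStatus.BENIGN,
          "Google Public DNS resolver queried by the second-stage payload.")
```

Two small enums replace the dozens of mashed-together “IOC Type” options, and the role can change from one report to the next without duplicating the indicator itself.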

More than just a sales pitch and flashy demo

Identifying the best solution should take more consideration than price points and system infrastructure requirements. Before you shell out thousands of dollars, a quick test drive is in order. Organizations should bring in the best 3–5 solutions they have found through research and have those vendors provide a briefing/training session with the teams that will use the systems. Ideally, the analysts should get hands-on time with the tools without the vendors in the room so they can evaluate the systems against their needs and speak freely as an evaluation team.

During the evaluations, each tool should be run through a series of tests (e.g. upload a PDF, ingest a website, export data, etc.) that measure how well a solution meets not only your identified requirements but also how well it fits your current processes. Before testing begins, leadership and analysts should develop evaluation scenarios and scorecards, based on existing workflows and identified needs, to run each platform through; this ensures that each platform is measured against the team’s requirements and that every system is evaluated equally. It is incredibly important that your evaluation team does not stray from the already-identified requirements. If, during the evaluation, analysts or leadership demand that a solution meet a requirement not previously disclosed to a vendor, they will only cause frustration on both sides of the evaluation process. When evaluating vendors, analysts should rate functionality on a gradient scale like:

  1. The system does ____ task better than the identified requirements suggest.
  2. The system does ____ task well.
  3. The system does ____ task.
  4. The system supports ____ task, but it is not intuitive, requires a work around, or requires too many steps.
  5. The system does not support this task.

A simpler scoring method may look something like:

  • Meets Requirement
  • Partially Meets Requirement
  • Does Not Meet Requirement

These scores should be determined by each individual analyst and then aggregated for comparison. Each evaluation should include an overall experience score of 1–5 to capture the user experience, as well as a written feedback section broken down into Pros, Cons, and Vendor Feedback. These notes can be critical for the decision makers at the end of the project when assessing costs, requirements alignment, etc. This documented methodology also minimizes business risk to the organization.
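As a hedged sketch of that aggregation step, assuming the five-point gradient scale above (where 1 is best): each analyst scores each requirement per vendor, and the comparison is the per-vendor mean plus a count of outright failures. The vendors, analysts, and scores below are invented for the example.

```python
from statistics import mean

# scores[vendor][analyst] -> list of gradient scores (1 best, 5 worst),
# one per requirement. Values below are invented for the example.
scores = {
    "Vendor A": {"alice": [1, 2, 2, 4], "bob": [2, 2, 3, 5]},
    "Vendor B": {"alice": [3, 3, 1, 2], "bob": [2, 4, 2, 2]},
}

for vendor, by_analyst in scores.items():
    all_scores = [s for ratings in by_analyst.values() for s in ratings]
    failures = sum(1 for s in all_scores if s == 5)  # "does not support"
    print(f"{vendor}: mean={mean(all_scores):.2f}, unsupported tasks={failures}")
```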

A note for leadership: provide your analysts an opportunity to have an open conversation with their fellow analysts during testing and evaluation. If leadership attends these evaluation sessions, consider quietly observing in order to ensure that analysts are comfortable collaborating and exploring the functionality of the systems with their peers. This is not the time to be overbearing, focused on getting a task done, or fixated on keeping things in scope. Analysts need to be able to explore the tools, since doing so will make them better analysts, make their evaluations more accurate, and even help build relationships within the teams.

In Conclusion…

So now we know that we want an analyst platform that is geared towards our analysts’ needs and current working processes. We want a tool that helps us save time, build collaborative relationships, and encourage analysis. We need a system that helps us understand the threats to our enterprise.

We do not want to dump noise into our system, rendering it unusable for our analysts. We do not want a system that makes determinations about which IOCs get uploaded into the system. We do not need a super-tool that does everything for the analysts without interaction.

We need a solution for analysts. We need to enable them to work more effectively before worrying about helping them work more efficiently. We need solutions.

At this point, you may be thinking it is weird that I didn’t mention a single vendor. There are a few reasons for that: 1) my articles are intended to help organizations make the best decisions for themselves; 2) there are a lot of rad companies out there with varying takes on their solutions; 3) I don’t need the grief of forgetting to mention one awesome vendor over another. I have worked with many of them over the years, and there are some awesome companies in the space. From the group of analysts that decided to build a better tool for themselves to the companies hosting conferences that bring analysts together, there’s a lot of cool stuff going on in this field. It’s not all snake-oil and aggressive competition. Find a tool that meets your needs and is supported by a company that stands behind its product and works with your analysts. Remember: good requirements drive awesome solutions. Do not build a solution and then come up with requirements to support it.

If you absolutely want my direct feedback and opinion, feel free to reach out. I am more than happy to discuss and can be found on Twitter @klrgrz and on LinkedIn at https://www.linkedin.com/in/andypiazza/

Resources (UPDATED!)

[embed]https://www.youtube.com/watch?v=ynm90wZLjNY[/embed]
[embed]https://medium.com/@andy.c.piazza/considerations-for-leveraging-cyber-threat-feeds-effectively-1d1cfa9fb140[/embed]
