
CAPEC User Summit Transcript - “The Missing Piece in Vulnerability Management”

Fil Filiposki, AttackForge


Session 1 - Pen Testing and Execution Flows



Speaker: Navaneeth Krishnan Subramanian (Session Host)

I want to introduce our next speaker, Fil Filiposki. He is the product owner and lead architect of AttackForge, which you may be familiar with; it is a family of workflow management tools for security and penetration testing. He's going to be talking today about the missing piece in vulnerability management. Take it away, Fil.

Speaker: Fil Filiposki

Thank you, good morning everyone. I'm Fil, one of the co-founders at AttackForge. We build workflow management tools for penetration testing, and we use CAPEC as a vulnerability language to help pen testers communicate findings in a standardized way.

I've been in the pen testing space now for the last decade, helping companies establish and improve their pen testing programs. I've also run a consultancy for about seven years and have overseen well over a thousand pen tests.

So today, I'm gonna provide a brief overview of one of the biggest challenges we're seeing in vulnerability management, and how CAPEC is part of the solution to that problem.

Let's start with: what is vulnerability management?

Vulnerability management is a process of identifying, normalizing, evaluating, treating, and then reporting on security vulnerabilities in systems.

Many sizable organizations have large teams with expensive tools and complex processes dedicated to just doing this function. They collect data and then report it to executives to let them know if the company is getting better or worse at closing those gaps.

When we consider what data goes into vulnerability management, we should consider data that comes from vulnerability scanners such as network scanners, from static analysis tools such as code reviews, as well as pen test findings.

However, from my experience, most organizations fail to incorporate their pen test data into their vulnerability management tools and processes. And there's a good reason for it and we're going to have a look at that in a moment.

But first: why is it a problem?

Well, let's consider this scenario: you have a house, and that house has 3 doors, and 2 of those doors are monitored. They have sensors to tell you whether they're open or closed. However, there's one door which is unmonitored, and you have absolutely no idea whether it's open or closed. And now let's say that's your front door, so someone can walk in anytime and you wouldn't know.

Now this is the same problem in vulnerability management. You have some data for vulnerabilities across your company's assets, but you're missing the full picture. You have the data from vulnerability scanners and from SAST tools; however, you're missing the data from pen testing, and that's important. Pen testing data usually consists of complex vulnerabilities which scanners cannot pick up. These are the ones pen testers spend days grinding away to find. And if vulnerability management can't see it, then it's invisible and, unfortunately, sometimes it never gets fixed.

Next slide, please.

So here we can see how vulnerability management works in a nutshell. We have vulnerabilities on the left-hand side which are the input. We have tools and processes which do all the crunching and normalization. And output, which is the known security posture for an asset.

This is what the humans look at when figuring out what to fix and when.

Vulnerability scanners and SAST tools usually include industry references and tags such as CAPEC, CWE, CVE, etc. This allows for efficient and effective normalization.

You have the same vulnerabilities being discovered between tools, so it's easy to know we can fix it in one place and it closes all the known gaps. However, when it comes to pen test findings, at most you might get a CVSS score, and that information alone is not enough to help you normalize the data.
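To make that concrete, here is a minimal sketch, not from the talk, of how tag-based normalization can work: two tools report the same underlying issue, and grouping on the shared CVE reference collapses them into one item to fix. The findings and field names below are made up for illustration only.

```python
from collections import defaultdict

# Hypothetical findings from two different tools; the field names are
# illustrative only, not a real tool's export format.
findings = [
    {"source": "network-scanner", "title": "OpenSSL Heartbleed", "cve": "CVE-2014-0160"},
    {"source": "sast-tool", "title": "Vulnerable OpenSSL version", "cve": "CVE-2014-0160"},
    {"source": "network-scanner", "title": "Self-signed certificate", "cve": None},
]

# Normalize on the shared industry tag: findings with the same CVE are the
# same underlying gap, so one fix closes them all.
normalized = defaultdict(list)
for item in findings:
    # Untagged findings fall back to a source-specific key and stay separate.
    key = item["cve"] or f"{item['source']}:{item['title']}"
    normalized[key].append(item)

for key, group in normalized.items():
    print(key, "->", len(group), "report(s)")
```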

So pen test findings generally will not have a CVE. They may have a CAPEC or CWE, if your pen testers like you. However, even if they provide this to you, there is no guarantee of the form in which you're going to receive it. And to make matters worse, pen test data consists of arbitrary findings that are defined however the pen tester feels at the time.

So this makes normalization impossible.

Therefore, most vulnerability management tools cannot cope with these unknowns and they struggle to find ways to bring pen test data into vulnerability management.

Next slide, please.

So how bad is it? Let's start at the beginning. Pen testing as a mainstream practice has been around for the past couple of decades. It has its roots in the consulting world. And by the nature of consulting, it's a competitive market. Companies, and individuals within those companies, are all competing for your hard-earned dollars, so everyone is trying to outdo each other.

So now we have the deliverable from a pen test, usually a static PDF or Word report, usually dozens or hundreds of pages long with a bunch of fluff. Companies spend days if not weeks trying to create the perfect report, one which justifies the thousands of dollars you just spent on their services. And because they're competitive, they all try to create new phrases and introduce new sections to make the report look impressive and to avoid the dreaded “Nessus Pen Test report”, which makes the report feel like it's pre-canned.

Therefore, you're not going to get the same report from 2 different vendors or even 2 different pen testers working within that same vendor.

If you don't believe me, you can see for yourself. Go to Google, type in “GitHub Pen Test reports”, and the first link you'll find is Julio Cesar Fort's public repository on GitHub, which is quite popular and referenced a lot. It has samples from about 50 different companies all over the world. You can browse through each one and you'll start to see the problems that we're talking about.

So this is a problem. The inconsistency in our pen test findings causes a major bottleneck for vulnerability management. How do we turn those static reports into normalized findings?

The definitions and recommendations for vulnerabilities change between each person doing QA on those vulnerabilities, and there are no tags to say: hey, this is actually the same one your company found with its internal scanners.

And even if your vendor gives you a report in a consistent format, and they're nice enough to give you some CVSS scores and maybe some CWE and CAPEC references, how the hell do you get that into your VM tools to process it? VM tools were never designed to deal with arbitrary data.

For example, a typical vulnerability may have a title, a description, a recommendation, and a proof of concept or steps to reproduce. So considering those 4 fields alone, with just that data, what would you do? You could try to hash them to create some sort of unique reference.

However, the next person doing QA changes that description and recommendation ever so slightly. Is this now a new vulnerability? How does your VM tool know?

It can't, and no fancy AI or machine learning solution is going to help you.
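To make the hashing problem concrete, here is a small Python sketch, my own illustration rather than anything from the talk: the same finding with one word changed during QA produces a completely different hash, so a hash-based VM tool treats it as a brand-new vulnerability.

```python
import hashlib

def finding_fingerprint(title, description, recommendation, poc):
    """Hash the four free-text fields to build a 'unique' reference."""
    blob = "\n".join([title, description, recommendation, poc])
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

# Original finding as written by the pen tester (made-up example).
v1 = finding_fingerprint(
    "SQL Injection in login form",
    "The login form concatenates user input into a SQL query.",
    "Use parameterised queries.",
    "POST /login with username=admin'--",
)

# Same finding after QA tweaks one word in the recommendation.
v2 = finding_fingerprint(
    "SQL Injection in login form",
    "The login form concatenates user input into a SQL query.",
    "Use parameterized queries.",
    "POST /login with username=admin'--",
)

print(v1 == v2)  # False: to a hash-based VM tool this now looks like a new vulnerability
```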

And for the pen test report that goes further and gives you those tags: how does your VM tool cope if I give you both a CWE and a CAPEC reference as tags for a vulnerability we found as part of the pen test? Which one do you use?

Many VM tools rely on one single tag only, and pen test findings may have multiple tags, and they may not even be in the format the tool expects.

So this combination of problems, starting with how pen testing is done, through the deliverables which are created and the output, to the current state of VM tools, means pen test data is likely never going to see the light of day in your VM tools and processes.

Next slide, please.

So how do we actually fix it? As you can probably tell, it's not a straightforward problem to solve. It does require a combination of factors.

So firstly, we need to create an industry standard for tagging pen test findings. Something that is easy to understand and easy to use and something that VM tools can then build towards. This is the same as what's happened with CVE and CVSS.

This standard would have to be able to deal with different types of vulnerabilities to cover things like web application and API pen tests, infrastructure and wireless, mobile apps, thick clients, IoT and embedded devices, physical security audits, red teams, etc. These are all security testing activities.

We see CAPEC as a natural fit for solving this. CAPEC focuses on attack patterns, which align closely with what pen testers are finding and with the industry's main focus on application security as well.

However, we would also need to consider ways to address other categories, such as network defense and red teaming, which is more where the ATT&CK framework focuses.

So after we have this new standard, we need to train pen testers to be able to use it. This itself will require some time and effort. However, it would allow consumers and suppliers of pen testing to finally be able to compare apples with apples.

We also need to agree on the definition of a vulnerability. This is important. MITRE and CAPEC have done an amazing job at creating a schema of really well-defined fields for a vulnerability.

However, from our experience people just make this up as they go.

So again, you can see for yourself on that public GitHub repository. Just go through each vendor's report and you'll see. It's like they're living on different planets. We need to agree on what a vulnerability is so tools can then predict and design against this.

So once we have standardized tagging and vulnerability fields, we need to get this to developers, engineers, and customers in machine-readable formats.

Spending days fighting Word to align those tables, add those captions, and create those footnotes is not helping. Machines don't care. They need reports in XML, JSON, CSV, or other common export formats that VM tools can actually read and process.
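As a rough sketch of what a machine-readable finding could look like, here is an illustrative Python snippet that serializes one pen test finding with CAPEC and CWE tags to JSON. The field names are assumptions made for illustration, not an agreed industry standard.

```python
import json

# Illustrative only: these field names are not a standard, just an example of
# structured, tagged pen test output that a VM tool could ingest directly.
finding = {
    "title": "SQL Injection in login form",
    "severity": "High",
    "cvss_v3_score": 8.6,
    "tags": {
        "capec": ["CAPEC-66"],  # SQL Injection attack pattern
        "cwe": ["CWE-89"],      # Improper Neutralization of Special Elements in an SQL Command
    },
    "description": "The login form concatenates user input into a SQL query.",
    "recommendation": "Use parameterised queries for all database access.",
    "steps_to_reproduce": ["POST /login with username=admin'--"],
}

print(json.dumps(finding, indent=2))
```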

And once we have these machine-readable files, standardized tagging, and structured vulnerability fields, VM tools can finally incorporate them into their processes and finally normalize against them. And once this is done, humans will actually be able to see it in action and report on it. And ultimately more things will get seen and, as a result, will also get fixed.

So, in summary, the solution to this problem is standardization and collaboration. We need to work together, not against each other, if we're going to stay one step ahead of the bad folks.

That's my presentation. I'm open to any questions, if anyone has any, through the chat.
