First Up, The OWASP Top 10

Many of you who work or have backgrounds in web application development or cyber security will be familiar with the OWASP Top 10 project. The OWASP Top 10 is a standard awareness document for developers and web application security, which represents a broad consensus about the most critical security risks (vulnerabilities) applicable to web applications.

The OWASP Top 10 was refreshed fairly recently to create the OWASP Top 10 for 2021 (https://owasp.org/Top10/).

Background To The phew Top 10

Although at phew we perform a wide variety of penetration testing (from wired and wireless private and public network testing, to complex Active Directory domains and Operational Technology), we have particular expertise in web application, web portal, web store and API penetration testing, so it is something we do a lot of.

We thought it would be interesting to look back on a year of our own testing and to quantify the vulnerability types that we see most often – our own “phew Top 10”, if you like.

The Methodology

In compiling the phew Top 10, we used a year’s worth of tests. The results are ranked in descending order by how often we perform a test and then report a finding in that category (rather than by how many separate instances of that vulnerability category we find in aggregate).

At phew we report on a more detailed basis than OWASP Top 10 categories, so the categories we’re reporting here are in some cases sub-categories of the OWASP Top 10. These categories are mapped to the most applicable CWEs, which we find more helpful to our customers than OWASP Top 10 categories alone.

The Findings

Here is a summary table of our findings:

Key Takeaway

We can see that each entry in the phew Top 10 maps onto one or more OWASP Top 10 categories, so in this sense our findings mirror the broader findings of the OWASP Top 10.

So What’s At Number One?

Over 70% of our tests found outdated, unsupported, or vulnerable components in use in the underlying platform, frameworks and dependencies, and almost all of these components have patches available for them. Some thoughts on this:

  • This would be a good time to reiterate the advice in our blog post “Discovering and Patching Vulnerabilities”.
  • The vast majority of the targets were running software components with publicly disclosed vulnerabilities for which there are patches currently available.
  • In summary – most organisations do not have sufficiently reliable vulnerability discovery and patch management programmes. Even if their own code is completely secure (which it typically isn’t), that code integrates or depends on someone else’s software that has vulnerabilities, and that continues to be a blind spot, hence the unenviable achievement of Number One in the phew Top 10. A minimal sketch of automating this kind of discovery follows below.
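
To make that concrete, here is a minimal sketch of automated vulnerability discovery for third-party components, assuming the public OSV.dev query API (https://api.osv.dev/v1/query) and an illustrative set of pinned Python dependencies. It is a sketch of the idea under those assumptions rather than a definitive implementation; a real programme would cover every ecosystem in use and feed its results into a patching workflow.

```python
"""Minimal sketch: check pinned dependencies against the OSV.dev advisory database.

Assumptions: the OSV.dev query API is reachable, and the package names and
versions below are illustrative only.
"""
import json
import urllib.request

# Hypothetical pinned dependencies for a Python web application.
PINNED_DEPENDENCIES = {
    "django": "3.2.0",
    "jinja2": "2.11.0",
    "requests": "2.25.0",
}

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(name: str, version: str) -> list[dict]:
    """Ask OSV.dev for published advisories affecting one pinned package."""
    payload = json.dumps(
        {"package": {"name": name, "ecosystem": "PyPI"}, "version": version}
    ).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read()).get("vulns", [])


if __name__ == "__main__":
    findings = 0
    for package, version in PINNED_DEPENDENCIES.items():
        for vuln in known_vulnerabilities(package, version):
            findings += 1
            print(f"{package}=={version}: {vuln['id']} - {vuln.get('summary', '')}")
    # Failing the build on any finding turns discovery into enforcement.
    raise SystemExit(1 if findings else 0)
```

Run in a scheduled job or as a pipeline step, a check like this surfaces the “patch available but not applied” gap described above before an attacker does.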

The Remainder Of The Top 10

Items 2-9 are common web application architecture or development errors. What’s happening here?

  • We do not think developers are to blame here. The fact that these continue to be common vulnerabilities, over years of penetration testing and years of OWASP Top 10s, suggests that other causes are at play.
  • In the same way that prize-winning authors require multiple rounds of editor input before they go to print (and we are all familiar with the errors that still sneak through), something similar is true of writing code and configuring systems. Developers and DevOps teams can’t be expected to meet all of their important obligations, in terms of feature requirements and time to market, and also never make security mistakes.
  • Test suites and other tooling used in the deployment pipeline have a role to play here (see the sketch after this list), but penetration testing by friendly “adversaries” who are skilled in taking an offensive position against the deployed product is also of critical value to even the most experienced and tooled-up teams.
  • Just as you can’t mark your own exams, and just as large or public organisations require independent financial auditing, independent penetration testing continues to support DevSecOps teams in finding security mistakes in coding and configuration, prior to those mistakes becoming real-world risks to the customer.
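
As an illustration of the pipeline tooling mentioned above, here is a minimal sketch of an automated security regression test, assuming a pytest-based suite and a hypothetical STAGING_URL environment variable pointing at the deployed application. It only checks a few response headers that commonly appear as findings; it complements, rather than replaces, independent penetration testing.

```python
"""Minimal sketch: a pipeline test that fails when common security headers are absent.

Assumptions: pytest is available and STAGING_URL (illustrative name) points at
the deployed application under test.
"""
import os
import urllib.request

import pytest

STAGING_URL = os.environ.get("STAGING_URL", "https://staging.example.com/")

# Headers whose absence is a frequent (and cheap to fix) misconfiguration finding.
EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
]


@pytest.mark.parametrize("header", EXPECTED_HEADERS)
def test_security_header_present(header):
    # One request per expected header keeps the failure report specific.
    response = urllib.request.urlopen(STAGING_URL, timeout=10)
    assert response.headers.get(header), f"{header} missing from {STAGING_URL}"
```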

Lastly, and although it came in at number 10, it is worth noting that almost 10% of our tests discovered the use of default or weak authentication credentials. This is evidence of a lack of policy, or of policy enforcement, around credentials, even for credentials that might grant elevated privileges in target systems.

  • Because almost all systems are constructed from combinations of sub-systems or components, rather than all being hand-coded from the silicon up, it is important to have robust policies, procedures and processes to ensure that those sub-components are actively configured to not be the weak link in the security chain. A minimal example of this kind of enforcement follows below.
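
As a minimal sketch of that kind of enforcement, the check below refuses to start an application when its configured credentials match a known vendor default or fall short of a length policy. The environment variable names and the default list are illustrative assumptions, not a definitive implementation; a real deployment would pull its policy and default list from the actual sub-components in use.

```python
"""Minimal sketch: refuse to start with default or weak credentials.

Assumptions: credentials arrive via ADMIN_USER / ADMIN_PASSWORD (illustrative
names), and the vendor defaults listed are examples only.
"""
import os
import sys

# Illustrative vendor/default pairs that should never survive into production.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "toor"),
}

MIN_PASSWORD_LENGTH = 16


def check_credentials(username: str, password: str) -> list[str]:
    """Return a list of policy violations for the supplied credentials."""
    problems = []
    if (username.lower(), password.lower()) in KNOWN_DEFAULTS:
        problems.append("credentials match a known vendor default")
    if len(password) < MIN_PASSWORD_LENGTH:
        problems.append(f"password shorter than {MIN_PASSWORD_LENGTH} characters")
    return problems


if __name__ == "__main__":
    user = os.environ.get("ADMIN_USER", "admin")
    secret = os.environ.get("ADMIN_PASSWORD", "")
    violations = check_credentials(user, secret)
    if violations:
        # Refuse to start rather than run as the weak link in the chain.
        sys.exit("startup blocked: " + "; ".join(violations))
    print("credential policy check passed")
```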

Why The Top 10 Matters

There were a total of 58 different categories or sub-categories of vulnerability reported on by phew in the sample period, so it’s clear that this is about more than just the OWASP Top 10 (or the phew Top 10).

However, in terms of understanding the key risks in this area, it is important to get the Top 10 right, because those are the most common vulnerabilities, and the most common things that bots and attackers are looking for (at least at the initial access stage).

Please do get in touch if you’d like to leverage phew’s experience and expertise to test the security of your web and mobile applications. And let’s see if you have the phew Top 10 (and Top 58) well covered!