Understanding Garbage Findings

Author: Mic Whitehorn

There is a well-meaning desire among penetration testers to produce findings. The fact of the matter is that we want to deliver value, and a report with few findings can feel like a failure to do so. This can lead to the inclusion of what I term Garbage findings. The problems with Garbage findings are that:

  1. They offer no real value on their own, potentially distracting from more meaningful issues
  2. Remediating them is largely a wasted effort
  3. When the findings are shared with an audience without the expertise to recognize them as invalid, they can reflect negatively on the team or organization responsible for the tested scope
  4. They undermine the credibility of the tester(s) and the assessment that was performed

In this post, I will break Garbage findings down into their core attributes and discuss how to recognize them. I will be addressing them from an Application Penetration Testing perspective, but the principle applies to all manner of penetration tests. While I use a few specific examples here, the intention is not to get overly bogged down in litigating specific findings, but rather to highlight their characteristics and the thought processes for evaluating whether a finding is worthwhile or Garbage.

I also want to note that every experienced penetration tester has written some Garbage findings. If you encounter one who believes they have not, they are mistaken. They are also demonstrating a lack of self-awareness, which makes them more prone to falling into the trap of writing Garbage findings again. After all, the best defense against writing Garbage findings is becoming better at recognizing when we ourselves are writing one. This type of finding can occur as a one-off on a given test, or as a recurring issue with a finding that we thought was good but are overdue to re-evaluate. The information we had when we originally made a decision may look different in today's light.

The most important attributes to this discussion are risk, impact, and conditions. To be more specific about how I'm using those terms: what (quantitatively or qualitatively) is the risk presented by the finding? What damage (in general terms) could be done if the issue presented by the finding were to factor into an accidental or intentional exploitation, and how would that damage impact the business? And what conditions would have to occur for the risk to be realized?

Let's start with the attribute of risk. Risk is a critical element for a finding to deliver value. A finding that never presents any risk is not a suitable finding for a penetration test. This does not mean all findings must be immediately exploitable. Risk is variable, and is commonly a key factor in the severity score of a finding. There is a key distinction between a finding that would only present risk if specific conditions are met, and one with no concrete risk potential. In the case of conditional risk, an important consideration is whether those conditional criteria are a finding-worthy risk in and of themselves. And if so, what is the interplay between our questionable finding and those criteria?

Let's pivot to a more concrete example. I'll pose it as a question: If your cross-site request forgery (CSRF) protection can be defeated by an attacker who has cross-site scripting (XSS) in the target user's session, what risk is presented by the flawed CSRF protection?

If we break down the answer into parts:

  • Due to the flawed CSRF protection, a successful attacker can issue blind (write-only), data-mutating requests in the context of the target user's session
  • In order to achieve the CSRF exploit, the attacker needs to establish XSS in the context of the target user's session
  • Through the use of XSS, the attacker would have the ability to issue read and write requests in the context of the target user's session

We can summarize that evaluation as: the potential for successfully leveraging a CSRF attack in this case is predicated on the attacker already having all of the access they would stand to gain by leveraging the CSRF. It is common to have to evaluate this exact scenario for a web application using a single-page app (SPA) architecture and the double-submit cookie pattern for CSRF protection, with the HttpOnly attribute absent (or set to false) on the CSRF token cookie. Strictly sticking with only the details described in this example, the CSRF neither presents a risk on its own, nor adds any novel risk to the XSS. This is a no-risk finding, so writing a Lack of CSRF Protection finding or using this as a justification for a Cookie Missing HttpOnly Flag finding would result in a Garbage finding.
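To make the dependency concrete, here is a minimal sketch of why the double-submit pattern offers nothing once XSS is in play. The cookie name `csrf_token`, the header name, and the endpoint are hypothetical details for illustration; the `getCookie` helper is ordinary cookie-string parsing, not a specific library API.

```javascript
// Extract a named cookie value from a document.cookie-style string.
// Works on any "name=value; name2=value2" cookie string.
function getCookie(cookieString, name) {
  const match = cookieString
    .split("; ")
    .find((pair) => pair.startsWith(name + "="));
  return match ? decodeURIComponent(match.split("=")[1]) : null;
}

// In a browser, an attacker's injected XSS payload could do roughly:
//
//   const token = getCookie(document.cookie, "csrf_token"); // readable: not HttpOnly
//   fetch("/api/account/email", {
//     method: "POST",
//     headers: { "X-CSRF-Token": token },  // satisfies double-submit check
//     body: JSON.stringify({ email: "attacker@example.com" }),
//   });
//
// But that same XSS could already issue authenticated read and write
// requests directly in the victim's session, so the defeated CSRF
// protection adds no novel risk on top of the XSS.

// Simulated demonstration of the cookie read (no browser required):
const simulatedCookies = "session=abc123; csrf_token=tok%2D42";
console.log(getCookie(simulatedCookies, "csrf_token")); // "tok-42"
```

The takeaway mirrors the bullet list above: every capability the CSRF "exploit" grants is a strict subset of what the prerequisite XSS already provides.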

There are also valid cases for findings where the risk is conditional: some other conditions need to be met before it presents any real risk. These conditions can take many forms. Perhaps they are some specific change to the environment, such as the data or configuration. Maybe some future event could occur that would cause the risk to be realized, such as disclosure of a vulnerability in vendor software. The fact that a risk is conditional doesn't inherently make the finding a Garbage finding. As a tester, I may see your application behaving in a way that is almost certainly more permissive than you intend, but with no means of exploiting it or immediately demonstrating risk. However, if I can identify the specific conditions under which the risk would become concrete, I can evaluate how they affect the finding. For example, I might determine that you have specified a fairly loose Referrer-Policy that risks leaking your route parameters to third-party services. I may also determine that, even though you use route parameters, none of them currently appear to be sensitive. However, it's a SaaS solution where new functionality is being actively developed, and new routes and parameters are being added. The risk would become concrete if you made a change that 1) you are reasonably likely to make, and 2) would not normally be expected to create this risk: there's no obvious reason to expect that using a sensitive value in your routes would result in third-party services receiving those values.

The value this finding provides is an informed position from which you can avoid realizing that risk. You can determine whether that Referrer-Policy is more permissive than you need, and mitigate the risk preemptively. You might also decide that you really don't want sensitive values in your routes at all, and make a design decision to ensure random identifiers are the only values used in route parameters. In either case, the finding truly identifies a risk that had a reasonable probability of being realized, and that it was possible to mitigate.
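The leak mechanism can be sketched with a simplified model of what a cross-origin request's Referer header carries under different policies. The policy names and the current browser default are real; the URL, the route parameter, and the simplified `refererFor` function are illustrative assumptions, not a full implementation of the Referrer Policy specification.

```javascript
// Simplified model: what Referer value does a cross-origin request
// carry under a given Referrer-Policy? (Illustrative only; the real
// spec also handles same-origin requests and HTTPS->HTTP downgrades.)
function refererFor(pageUrl, policy) {
  const u = new URL(pageUrl);
  switch (policy) {
    case "no-referrer":
      return null; // nothing is sent at all
    case "origin":
    case "strict-origin-when-cross-origin": // current browser default
      return u.origin + "/"; // origin only: path and params are stripped
    case "unsafe-url":
      return u.origin + u.pathname; // full path leaks, route params included
    default:
      return u.origin + "/";
  }
}

// A route parameter that might become sensitive in a future release:
const page = "https://app.example.com/reset/3f9c2a7e";
console.log(refererFor(page, "unsafe-url"));
// third party receives the full path, token-like parameter included
console.log(refererFor(page, "strict-origin-when-cross-origin"));
// third party receives only the origin; the route parameter never leaves
```

Tightening the policy now is the preemptive mitigation the finding enables: once a sensitive route parameter ships, the loose policy leaks it silently.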

Let's consider one more example. This time, let's evaluate a finding that really does not present a risk, and would only arguably be a risk if some change occurred that is itself a more severe risk. Furthermore, addressing the original finding would not effectively mitigate that more severe risk if such a change did occur. To be specific, we will say it's an unpatched JavaScript library in your application. Even though it is unpatched, it's still a supported version, and has no known or publicly disclosed vulnerabilities. It's just not the latest patch level. Let's enumerate the key facts here again:

  • You have unpatched software in the form of a third-party package
  • It has no known flaws; there is no identified way that it can be abused or exploited today, even conditionally
  • If a zero-day vulnerability drops, you will need to wait for a patch and apply it
  • Importantly, even if you were fully patched today, and a zero-day dropped, you would still need to wait for a patch and apply it

Since the unpatched library finding on its own presents no risk, and the only identified condition where it would become a risk is a condition that is not effectively mitigated by patching the library today, you have a Garbage finding. With that being said, patching would still be a great idea. The fact that it would be good to patch does not make a missing-patch finding suitable for a pentest report. I also want to note that the decision that this was a Garbage finding was dependent on the earlier statement that the library is a supported version. If your library was no longer a supported major version, this would compound the risk of the zero-day, since a migration to a supported version would be necessary before a patch could be applied, delaying your organization's ability to remediate the risk posed by that zero-day. If that were the case, a finding would certainly be warranted.

The key takeaway here is that we all, within this profession, need to be cognizant of when we are writing Garbage findings. It is important to go through the mental exercise of thinking about:

  • What risk is actually there?
  • What conditions are required for the risk to be realized?
  • Do those conditions come with built-in risk of their own? If so, does the finding still present its own novel risk or augmentation of conditional risks?
  • Does remediating the original finding prevent risk in a meaningful way?

And remember: just because something is a Garbage finding does not mean it has no place in the report at all, or should not be passed on to the client. As a security professional, use your judgment to do what's best for the client. Just keep the garbage out of the findings section.
