Detecting Dangling SaaS Subdomains and Real Subdomain Takeovers

Author: Jordan Bonagura

Subdomain takeover is one of those vulnerabilities that refuses to die. Every few years it gets rediscovered, scanners add more signatures, and reports get louder but, in my opinion, not better. After years of running real-world assessments, I kept hitting my head against the wall with tools that flag dozens of possible takeovers, most of which collapse the moment you actually look at them.

The problem isn’t that subdomain takeover is rare. It’s that modern SaaS environments broke the assumptions scanners still rely on. What we’re left with is noise dressed up as risk.

The most misleading category I keep seeing involves dangling SaaS subdomains. A team sets up a subdomain pointing to a third-party platform like GitHub, Heroku, Netlify, Vercel, or Azure Static Web Apps; months later the project is deleted, but the DNS stays. From the outside, that looks scary. From a scanner's point of view, it's usually an instant "takeover." In reality? Often nothing can be claimed at all.

What a Dangling SaaS Subdomain Actually Is

At its core, the setup is simple. A subdomain points to a SaaS provider using a CNAME or, in some cases, an A record. The SaaS project gets deleted or abandoned. Nobody cares to clean up the DNS. That dangling reference might allow someone else to claim the missing resource and receive traffic for that subdomain.

Here's the catch: some platforms allow reassignment. Some don't. Some require proof of project ownership. Some reuse infrastructure aggressively. Some return generic 404s even when everything is fine. Treating all of these cases the same is where scanners go off the rails.
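
To make that concrete, here's a minimal sketch of how per-provider behavior could be captured as data rather than as a single heuristic. The suffixes, fingerprint strings, and claimability flags below are illustrative assumptions for the example, not an authoritative list:

# Illustrative only: per-provider metadata describing how each platform behaves.
# Suffixes, fingerprints, and claimability flags are assumptions, not a vetted list.
PROVIDERS = {
    "github_pages": {
        "cname_suffixes": [".github.io"],
        "error_fingerprint": "there isn't a github pages site here",
        "claimable_without_verification": True,   # assumption: historically claim-by-repo
    },
    "heroku": {
        "cname_suffixes": [".herokuapp.com"],
        "error_fingerprint": "no such app",
        "claimable_without_verification": False,  # assumption: app names not freely re-registrable
    },
    "azure_static_web_apps": {
        "cname_suffixes": [".azurestaticapps.net"],
        "error_fingerprint": None,                # generic 404s are a weak signal on their own
        "claimable_without_verification": False,
    },
}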

Why Most Scanners Get This Wrong

Most takeover scanners are built around single-signal heuristics. In practice, that means they check for things like:

  • “There’s a CNAME to a known SaaS provider”
  • “The HTTP response is 404”
  • “The body contains a provider error message”

Any one of those can trigger a finding. In isolation, none of them answers the only question that matters: can someone else actually claim this resource right now?

This leads to predictable failures. CDN edge endpoints get misclassified as SaaS. First-party infrastructure behind A records gets flagged. Active SaaS apps with custom error pages look “abandoned.” Wordlist enumeration adds speculative subdomains that never existed in DNS in the first place.

We already know the end result: long reports, low confidence, and hours spent proving that not everything you can point at is actually claimable.

A Different Way to Look at the Problem

After getting burned by this pattern repeatedly, I stopped asking “Could this be a takeover?” and started asking “What evidence would convince me this is real?”

That mindset led to the development of a new tool called SaaS Subdomain Surface Scanner. The tool is intentionally boring: it doesn't chase volume, it doesn't flag "maybes," and it tries very hard to say "no" unless multiple independent signals agree.

Instead of one big heuristic, the analysis breaks down into small, verifiable questions:

  • Is DNS actually active?
  • Does it point to infrastructure that is claimable, not just SaaS-branded?
  • Is the SaaS resource really missing?
  • Does HTTP behavior support abandonment?
  • Is this first-party infrastructure, CDN edge, or third-party hosting?
  • How was this subdomain discovered in the first place?

Only when those answers line up does the tool even consider reporting a takeover.
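
As a rough sketch of what "lining up" means, picture each question producing an independent signal and a verdict that only fires when all of them agree. The field names here are illustrative, not the tool's actual schema:

from dataclasses import dataclass

@dataclass
class Evidence:
    dns_active: bool                  # the subdomain still resolves
    points_to_claimable: bool         # claimable SaaS infra, not a CDN edge or first-party host
    resource_missing: bool            # the SaaS project itself appears to be gone
    http_supports_abandonment: bool   # provider-specific "nothing here" response, not just any 404

def takeover_possible(evidence: Evidence) -> bool:
    # Every independent signal must agree; any single missing piece kills the finding.
    return all([
        evidence.dns_active,
        evidence.points_to_claimable,
        evidence.resource_missing,
        evidence.http_supports_abandonment,
    ])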

How That Looks in Practice

Enumeration starts conservatively. Certificate Transparency is the primary source. Wordlists (--speculative) and DNS brute-force (--bruteforce) are optional and clearly marked as speculative or low confidence. Each subdomain carries its origin with it, because how something was found matters later.
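
Here's a minimal sketch of that CT-first step, assuming the public crt.sh JSON endpoint; the origin label is illustrative and may not match the tool's actual output:

import requests

def ct_subdomains(domain):
    # Pull names from Certificate Transparency logs via crt.sh and tag their origin.
    # CT-observed names are a much stronger starting point than wordlist guesses.
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    seen = set()
    for entry in resp.json():
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower().lstrip("*.")
            if name.endswith(domain):
                seen.add(name)
    return [{"subdomain": s, "origin": "certificate_transparency"} for s in sorted(seen)]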

DNS resolution is explicit. CNAMEs are preferred, A records are handled separately, and the resolved target is always tracked. This alone kills a huge class of false positives.

Here’s a simplified example from the resolver logic:

import dns.resolver
import dns.exception

def resolve_dns(subdomain):
    # Prefer CNAME records; fall back to A records and track which type resolved.
    try:
        answers = dns.resolver.resolve(subdomain, 'CNAME')
        return True, f"CNAME -> {answers[0].target}"
    except dns.exception.DNSException:
        try:
            answers = dns.resolver.resolve(subdomain, 'A')
            return True, f"A -> {answers[0].address}"
        except dns.exception.DNSException:
            return False, None

That distinction matters later, especially when dealing with platforms like GitHub, which behave very differently depending on whether you hit them via A record or CNAME. 

SaaS detection is intentionally conservative.  If DNS evidence doesn’t make sense for takeover, the SaaS label is discarded entirely.  HTTP probing looks for provider-specific error messages and headers but never trusts them alone. 
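
A simplified sketch of that probing step, assuming a fingerprint string like the ones in the table earlier. Note that it deliberately returns a raw signal, not a verdict:

import requests

def http_abandonment_signal(subdomain, fingerprint):
    # Fetch the page and check for a provider-specific "nothing here" message.
    # A match is only one signal; on its own it proves nothing about claimability.
    try:
        resp = requests.get(f"http://{subdomain}", timeout=10, allow_redirects=True)
    except requests.RequestException:
        return {"http_status": None, "fingerprint_match": False}
    return {
        "http_status": resp.status_code,
        "fingerprint_match": bool(fingerprint) and fingerprint.lower() in resp.text.lower(),
    }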

Each result ends up as a JSON object, something like this:

{
  "subdomain": "docs.example.com",
  "dns_active": true,
  "dns_target": "CNAME -> example.github.io",
  "saas": "github_pages",
  "http_status": 404,
  "takeover_possible": true,
  "confidence": "high",
  "analysis": [
    "DNS active pointing to SaaS",
    "HTTP 404 from provider endpoint",
    "Provider error message: 'there isn't a github pages site here'",
    "Critical subdomain name"
  ]
}
If any of those signals broke down (an active project, mismatched DNS, a first-party target), the subdomain wouldn't be flagged at all.

GitHub as a Reality Check

GitHub is a good example of why restraint matters. A real takeover requires DNS pointing to GitHub Pages infrastructure and a GitHub-specific “no site here” response. A generic 404 alone isn’t enough. A CNAME alone isn’t enough. Miss one piece, and the risk drops to zero.
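
For GitHub Pages, the combined check might look roughly like this. It's a sketch: the error string and the Pages IP addresses are taken from public behavior and can change, so treat them as assumptions rather than constants:

GITHUB_PAGES_ERROR = "there isn't a github pages site here"
GITHUB_PAGES_IPS = {"185.199.108.153", "185.199.109.153",
                    "185.199.110.153", "185.199.111.153"}

def github_pages_takeover_candidate(dns_target, http_status, body):
    # Both conditions must hold: DNS points at GitHub Pages infrastructure
    # AND GitHub itself says no site is configured there. Miss one, no finding.
    points_to_pages = dns_target.endswith(".github.io") or dns_target in GITHUB_PAGES_IPS
    says_no_site = http_status == 404 and GITHUB_PAGES_ERROR in body.lower()
    return points_to_pages and says_no_site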

This is where most scanners fail. They stop at the first signal. The tool keeps going until the picture makes sense or doesn’t.

When This Actually Helps

This approach shines when you’re validating takeover reports, managing a large SaaS-heavy attack surface, or doing scoped work where false positives cost time and credibility. It’s also useful for research, not to prove that takeovers exist, but to understand when they actually matter.

Dangling SaaS subdomains are real. But accurate detection requires patience, context, and the discipline to say “no” far more often than “yes.” Less noise builds more trust, and in security, that matters more than flashy findings.

I’m planning to make this tool publicly available soon on my GitHub and through Professionally Evil - Secure Ideas. Once it’s out, you’ll be able to run it yourself, inspect the logic, and see how it decides which takeovers are real. My goal isn’t to scream “takeover!” at every CNAME or 404, but to give you something you can actually trust in the real world.

Stop chasing ghosts.