By J. Ho · Published May 01, 2026 · 8 min read

Self-cannibalisation in AI Overviews: when two of your own pages compete for the same answer slot


**TL;DR** — Across 24 client sites in April 2026 we tracked how often Google AI Overviews pulls two URLs from the same domain into a single answer card. Self-cannibalisation now affects roughly 18% of cited domains in our basket, and it measurably degrades the visibility of every query where it happens — the composer blends overlapping pages into a weaker, less specific answer than either page would have produced alone. Three structural moves contain it: a single canonical answer page per intent, a clear sub-topic hierarchy in your URL graph, and an internal-link pattern that points the composer at the canonical page first. The fix is mostly link-graph work; almost none of it requires new content.

The 2024 cannibalisation problem is back in a different shape

In 2024 we wrote about classic SERP cannibalisation: two of your URLs compete for the same query, both rank somewhere on page one, neither ranks as well as a single canonical page would. The fix was usually a 301, a noindex, or a content merge. In 2026 the cannibalisation moves up a layer — into the AI Overview itself. The composer is willing to cite multiple URLs from the same domain inside one answer card, and when those URLs cover overlapping subject matter the resulting prose drifts into vague middle ground. The phenomenon is not a ranking problem in the classical sense; it is a synthesis problem.

It is also harder to see in standard reports. GSC will show you that two URLs were cited; the AI-search reporting tools we use will show you the citation count by domain. Neither view tells you the answer text got worse because of the overlap. We only noticed because a recurring client question kept being "why does the answer summary feel watered down on this query, even though we are cited?" — and on those queries we were always cited twice.

How we measured it

24 client sites — a mix of SaaS, DTC, B2B and two media properties — each with a 60-query basket. Weekly captures of the AI Overview answer card via Playwright, recording both the cited URL set and the prose text. We coded each capture into one of three states: single domain citation, multi-domain citations only (no two URLs from the same domain), and self-cannibalised (two or more URLs from the same domain). We then scored the prose answer on three dimensions a junior strategist could rate from the screenshot alone: specificity, keyword targeting, and brand presence.
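The coding step is mechanical once the cited URL set has been scraped from the answer card. A minimal sketch of the three-state classifier described above; the function and state names are ours, and the URL extraction itself (the Playwright capture) is assumed to have happened upstream:

```python
from collections import Counter
from urllib.parse import urlparse

def classify_capture(cited_urls):
    """Classify one AI Overview capture by its cited URL set.

    States mirror the coding scheme used in the study:
      "single":            only one domain is cited
      "multi_domain":      several domains, no domain cited twice
      "self_cannibalised": two or more URLs share a domain
    """
    domains = Counter(
        urlparse(u).netloc.lower().removeprefix("www.") for u in cited_urls
    )
    if any(n >= 2 for n in domains.values()):
        return "self_cannibalised"
    return "single" if len(domains) == 1 else "multi_domain"
```

Run weekly over the 60-query basket, this yields the per-domain cannibalisation rate directly, with no manual screenshot review needed for the state label (the prose scoring still needs a human).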

The headline number — 18% of cited domains showed self-cannibalisation in at least one query during the month — is not by itself alarming. The damaging finding is what happens to the prose on those queries. Self-cannibalised answers scored 31% lower on specificity and 22% lower on brand presence than single-citation answers from the same domain. The composer fights itself when fed two competing pages and the result reads like a forced compromise. The user reads a less useful answer; the brand looks less authoritative; the click probability drops.

Which page topologies invite cannibalisation

Three patterns dominated. First, an old "ultimate guide" co-existing with a newer, narrower "definitive answer" article on the same topic — the kind of editorial output that piles up at any content team after eighteen months. The composer reads both, and the prose answer awkwardly tries to honour both framings. The fix is consolidation: pick which page is canonical, 301 the other or restructure it into a sub-topic that is genuinely distinct, and update the internal links. We have done this on six client engagements this quarter and the average answer-card improvement was visible inside two crawl cycles.

Second, parallel landing pages built for different campaigns or audiences but pointing at the same intent. SaaS clients with a "for engineers" landing page and a "for engineering managers" landing page both targeting the same product feature were the worst offenders — Google's composer cannot tell which audience is asking the question, so it cites both and the answer speaks to nobody. One of those pages should be canonical for the feature query and the other should be sub-routed under a different intent (use cases, integrations, pricing).

Third, blog-versus-resource-versus-glossary triplets. A glossary entry, a blog explainer, and a long-form resource page all aiming at "what is X" from slightly different angles. The composer treats them as three sources of the same fact and the resulting paraphrase loses the strength any one of the three originally had. Glossary entries should be terse definitions that sit upstream in the link graph and explicitly link to the canonical answer page. The blog post is for narrative; the resource is for depth; the glossary is the routing layer.

What the link graph has to do with it

The single highest-leverage fix we have found is the internal-link pattern. The composer is using internal links as a confidence signal about which page is the canonical answer for an intent. When five pages on your site link to page A with an anchor matching the query, and three pages link to page B with a similar anchor, the composer often surfaces both. When fifteen pages link to page A with the canonical anchor and the rest link to page A from page B as a sub-topic reference, the composer surfaces page A and treats page B as supporting context rather than a peer source.
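The anchor census behind that observation can be approximated from a standard crawl export. A sketch; the link-tuple format and the dominance threshold are our assumptions, not a known detail of how the composer actually weights internal links:

```python
from collections import Counter

def anchor_confidence(links, query_anchor, ratio=0.6):
    """Estimate which page the link graph nominates as canonical for an anchor.

    `links` is an iterable of (source_url, target_url, anchor_text) tuples
    from a site crawl. Counts internal links whose anchor text contains the
    query anchor, per target page, and reports whether one page clearly
    dominates, the pattern we see correlate with a single citation.
    """
    counts = Counter(
        target for _, target, anchor in links
        if query_anchor.lower() in anchor.lower()
    )
    if not counts:
        return None, False
    leader, leader_n = counts.most_common(1)[0]
    total = sum(counts.values())
    # "dominates" means the leader holds at least `ratio` of matching anchors
    return leader, leader_n / total >= ratio
```

A 5-to-3 split (the ambiguous case described above) fails the threshold; a 15-to-rest split passes it, which is exactly the distinction the paragraph draws.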

Concretely, we run a checklist on every cannibalised query: pick the canonical page; ensure its title and H1 contain the exact anchor we want the composer to read; audit the top 20 internal links on the site for that anchor and re-route any that currently point at competing pages; add a single contextual link from each competing page to the canonical page using the canonical anchor. Done as a batch, the work fits in a half-day and the answer-card improvement is observable inside the next two crawl cycles. The mistake is doing it page by page; the link graph is the unit of change.
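The re-routing step of that checklist is the part worth batching: given the chosen canonical page and its competitors, list every internal link that still points a matching anchor at the wrong page. A sketch, with the crawl-export tuple format assumed as before:

```python
def reroute_plan(links, competitors, anchor):
    """Build a batch edit list for one cannibalised query.

    `links` is (source_url, target_url, anchor_text) tuples from a crawl.
    Returns the (source, current_target) pairs whose anchor text matches the
    query anchor but which point at a competing page; each pair is one
    re-route candidate to repoint at the canonical URL.
    """
    competing = set(competitors)
    return [
        (src, tgt)
        for src, tgt, a in links
        if tgt in competing and anchor.lower() in a.lower()
    ]
```

The output is a worklist, not an automated change; the "add one contextual link from each competing page to the canonical page" step still needs an editor's judgment on placement.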

When two cited URLs is actually fine

Not every dual citation is cannibalisation. The composer routinely cites a category page and a product detail page from the same domain when the query has both navigational and informational intent — that is not a problem; it is the AI Overview behaving correctly. The diagnostic is whether the two URLs cover overlapping topical ground. A category page and a product page do not; a glossary entry and a blog explainer about the same term often do. We score this by hand for now — the heuristic is whether you could merge the two pages without losing distinct user value, and if you could, you probably should.
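We score this by hand, but a crude automated pre-filter is possible: token overlap between the two pages' titles or intros, so only the high-overlap pairs reach the manual check. A sketch; the stopword list and any cut-off you pick are illustrative, not calibrated:

```python
def topical_overlap(text_a, text_b):
    """Jaccard similarity over content words, a rough proxy for the hand check.

    High overlap between two cited pages' titles or intros suggests shared
    ground (glossary entry vs. blog explainer); low overlap suggests
    complementary intents (category page vs. product page).
    """
    stopwords = {"the", "a", "an", "of", "to", "and", "for", "in", "is"}

    def tok(text):
        return {w for w in text.lower().split() if w not in stopwords}

    a, b = tok(text_a), tok(text_b)
    return len(a & b) / len(a | b) if a | b else 0.0
```

Anything a heuristic like this flags still goes through the merge question above; the score only decides which pairs are worth a human's time.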

There is also a small but real positive case for parallel pages: when a query has two sub-intents (for example, "how does X work" and "is X right for me") and you have a strong page for each, the composer will sometimes surface both and produce a richer answer than either page would have alone. This is the architecture you want to be in — you do not get there by accident, and you do not get there by ignoring the cannibalisation lower in the funnel.

What changed in our content audits

Two additions. We now run a quarterly "self-citation overlap" pass on every client basket — for each query where the client is cited, we record whether more than one URL from the client domain appears, and whether those URLs share intent. The audit takes about half a day per client and surfaces, on average, four to six cannibalised intents that were invisible in the standard rank tracker. The fixes that follow are mostly link-graph work, occasionally a content merge, and only rarely require new writing.
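The overlap pass itself reduces to a grouping step over the capture data; the shared-intent judgment stays manual. A sketch, assuming one capture per query in a simple query-to-URLs mapping (the input format is ours):

```python
from urllib.parse import urlparse

def self_citation_overlap(captures, client_domain):
    """Quarterly overlap pass over a query basket.

    `captures` maps query -> list of cited URLs from the answer card.
    Returns the queries where two or more URLs from the client domain
    appear, i.e. the candidates for the manual shared-intent check.
    """
    flagged = {}
    for query, urls in captures.items():
        own = [
            u for u in urls
            if urlparse(u).netloc.lower().removeprefix("www.") == client_domain
        ]
        if len(own) >= 2:
            flagged[query] = own
    return flagged
```

On a 60-query basket this runs in milliseconds; the half-day cost quoted above is the intent-sharing review of what it surfaces, not the computation.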

We also dropped one habit. Through 2024 we measured "indexed pages" as a positive — more is better, more is coverage. In 2026 we measure "indexed pages per intent" and treat the ratio as a leading indicator. A site with five indexed pages per commercial intent is not five times more visible than a site with one; in AI Overviews it is often less visible, because the composer has more chances to blend the message into mush. Coverage is good; redundancy is not the same as coverage.
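The ratio is trivial to compute once each indexed URL carries an intent label; the intent taxonomy, not the code, is the hard part. A sketch, with the page-to-intent mapping assumed to come from your own audit sheet:

```python
from collections import Counter

def pages_per_intent(page_to_intent):
    """Leading-indicator metric: indexed pages per intent.

    `page_to_intent` maps each indexed URL to its assigned intent label.
    Returns intent -> page count; anything above 1 on a commercial intent
    is redundancy risk, not extra coverage.
    """
    return dict(Counter(page_to_intent.values()))
```

Tracked quarter over quarter, a rising count on a commercial intent is the early warning this section argues for, visible before the answer card degrades.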

  • 01 Audit your top 60 cited queries weekly for self-cannibalisation — record where two or more URLs from your domain appear in the same answer card. Roughly 18% of cited domains show it in at least one query per month.
  • 02 Score the prose answer on those queries for specificity and brand presence. Cannibalised answers scored 31% lower on specificity in our sample; the visibility metric is the prose, not just the citation.
  • 03 Pick a canonical page per intent. Re-route the top internal links to the canonical, add a sub-topic link from competing pages, and let the composer re-read the link graph. The fix is usually a half-day and observable inside two crawl cycles.
  • 04 Track "indexed pages per intent" as a leading indicator. More indexed pages on the same intent is not coverage, it is risk — the composer will blend competing pages into a weaker answer.

Where this argument breaks

For sites under about 200 indexed pages, self-cannibalisation is rare and the audit is overhead — single-author personal sites and early-stage startups can usually skip the work and revisit it once the library grows past the threshold. For large publishers and marketplaces, the inverse problem applies: deliberate parallel coverage is a feature, not a bug, and the audit needs a custom intent taxonomy before the metric makes sense. In Chinese-language search, 文心 (ERNIE) and 通义 (Tongyi) currently behave differently around dual citations from the same domain — the cannibalisation effect is muted on those engines but the structural advice still applies. Outside those carve-outs, self-cannibalisation is the unmonitored quality leak hiding inside otherwise positive AI-search dashboards — the citations look fine, the answer card is worse than it should be, and the only place the gap is visible is in the prose.

