**TL;DR** — One year into AI Overviews being the default treatment on commercial queries, the playbook for getting cited has stopped looking like classic SEO. The unit of work is no longer "rank the page" — it is "make a single passage answerable, attributable and worth quoting." The field-tested checklist is below, including the parts the public guidance still does not say out loud.
What "ranking" means inside an AI Overview
AI Overviews compose an answer from a small set of source URLs and surface a citation strip beside or beneath the generated text. The key fact: the citation set is *not* the same as the top of the blue-link SERP. Across 38 queries we tracked weekly through Q1 2026, the average overlap between the AI Overview citation set and positions 1–10 was 47%. Roughly half of the citations come from URLs that are also ranking; the other half come from pages on the same topic that simply contain a more quotable passage at the right structural depth.
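The overlap number is trivial to reproduce yourself. A minimal sketch, assuming you already export the AI Overview citation URLs and the organic top 10 from your rank tracker (the example lists are hypothetical):

```python
from urllib.parse import urlsplit

def normalize(url: str) -> str:
    """Collapse scheme and trailing-slash variants so one page counts once."""
    parts = urlsplit(url)
    return parts.netloc.lower() + parts.path.rstrip("/")

def citation_overlap(ao_citations: list[str], organic_top10: list[str]) -> float:
    """Fraction of AI Overview citations that also sit in positions 1-10."""
    cited = {normalize(u) for u in ao_citations}
    ranked = {normalize(u) for u in organic_top10}
    return len(cited & ranked) / len(cited) if cited else 0.0

# Hypothetical snapshot for one query; real lists come from your tracker.
ao = ["https://example.com/guide/", "https://other-site.com/faq"]
top10 = ["https://example.com/guide", "https://third-site.com/post"]
print(f"{citation_overlap(ao, top10):.0%}")  # -> 50%
```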
This means the lever is no longer just "improve rankings." There are now two parallel surfaces to optimise: the classic SERP, and the answer-composition layer that decides which paragraphs are worth lifting verbatim. The two surfaces share signals but reward different shapes of writing. A page that ranks #2 with a long, narrative intro can be skipped by the answer composer in favour of a page at position #11 that opens with a single declarative paragraph and a short list. We have watched this happen often enough that "skipped at #2 / cited at #11" is now a category in our weekly review.
## The passage geometry that gets quoted
When we reverse-engineered the passages actually pulled into AI Overviews across our tracked queries, three structural traits showed up over and over. First, the cited paragraph almost always sat directly under a heading whose wording paraphrased the query — not exact-match, but topically aligned. Second, the paragraph itself was 40–80 words, not 200. Long paragraphs got passed over even when their content was better; short ones got pulled even when their content was thinner. Third, the paragraph contained a self-contained claim with one or two concrete numbers, names or dates. Pages that hedged ("it depends," "in some cases") were measurably less likely to be cited than pages that committed to a number with a date attached, even when the number was uncomfortable to publish.
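Those three traits are mechanical enough to lint for before publishing. A rough sketch with our thresholds hard-coded; the heading check is a crude term-overlap proxy for "topically aligned," and `query_terms` is whatever set of terms you target the page at:

```python
import re

HEDGES = re.compile(r"\bit depends\b|\bin some cases\b", re.I)

def liftout_failures(heading: str, paragraph: str, query_terms: set[str]) -> list[str]:
    """Return which of the three geometry traits a candidate passage misses."""
    failures = []
    n = len(paragraph.split())
    if not 40 <= n <= 80:
        failures.append(f"length is {n} words, want 40-80")
    heading_words = {w.strip(".,:?").lower() for w in heading.split()}
    if not heading_words & {t.lower() for t in query_terms}:
        failures.append("heading shares no terms with the target query")
    if not re.search(r"\d", paragraph):
        failures.append("no digits: missing a concrete number or date")
    if HEDGES.search(paragraph):
        failures.append("hedging phrase present")
    return failures
```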
Practically: every page you want cited needs a "lift-out paragraph" near the top of each major section — a short, attribution-friendly answer to one sub-question, written so a reader can paste it into Slack with no surrounding context and still know what it means. Below that paragraph, you can be as long-form as you like; the long-form is what makes the page rank, the lift-out paragraph is what makes it cited. We treat these as two different writing tasks with two different reviewers.
## Schema, sameAs and the entity layer
The composer is more aggressive about preferring sources whose entity is unambiguous. That sounds abstract; it cashes out as four concrete moves:

- Add Organization schema with `sameAs` links to your Wikidata entry, your LinkedIn page, and the obvious profile sites for your industry. Most clients we audit are missing at least two of the three.
- Author each post with a real Person whose `sameAs` chain links to a Google Scholar or industry-specific profile, not just a Twitter handle.
- Use Article schema with `author` set to that Person reference, not a string.
- Surface a `speakable` block on pages where you genuinely have a 30–60 word answer worth reading aloud. The answer composer reads `speakable` as a hint about what the writer thinks is the quotable passage; on pages where we added it, the cited passage matched the speakable selector 71% of the time.
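A minimal sketch of that wiring, emitted as JSON-LD from Python. Every URL, id and selector here is hypothetical; swap in your own entity pages:

```python
import json

ORG_ID = "https://example.com/#org"                            # hypothetical ids
PERSON_ID = "https://example.com/about/team/jane-doe/#person"

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": ORG_ID,
    "name": "Example Co",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",       # your Wikidata entry
        "https://www.linkedin.com/company/example-co",
        "https://github.com/example-co",                 # the profile site your industry reads
    ],
}

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What gets cited in AI Overviews",
    "author": {"@id": PERSON_ID},        # a Person reference, never a bare string
    "publisher": {"@id": ORG_ID},
    "speakable": {                       # points at the lift-out paragraph
        "@type": "SpeakableSpecification",
        "cssSelector": [".lift-out"],
    },
}

print(json.dumps([organization, article], indent=2))
```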
A small but consequential detail: the Person reference must be a stable URL on your site (we use `/about/team/<slug>/`) with its own JSON-LD that resolves to the same `@id`. The composer will silently downgrade pages where the author is a string, where the author URL 404s, or where the author page does not itself self-identify as a Person. We have seen authority shift on a site within three weeks just from fixing this — no new content, just a consistently surfaced author identity.
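That consistency requirement is checkable. A dependency-free sketch of the audit we run; the regex extraction is crude but serviceable, and a 404 on the author URL simply yields no Person:

```python
import json
import re
import urllib.request

def jsonld_blocks(url: str) -> list[dict]:
    """Extract every JSON-LD object from a page; empty list on fetch errors."""
    try:
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    except OSError:
        return []
    pattern = r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>'
    blocks: list[dict] = []
    for raw in re.findall(pattern, html, re.S | re.I):
        try:
            data = json.loads(raw)
            blocks.extend(data if isinstance(data, list) else [data])
        except json.JSONDecodeError:
            continue
    return blocks

def author_identity_ok(article_url: str) -> bool:
    """Author must be an @id whose own page self-identifies as that Person."""
    articles = [b for b in jsonld_blocks(article_url) if b.get("@type") == "Article"]
    if not articles:
        return False
    for art in articles:
        author = art.get("author")
        if not isinstance(author, dict) or "@id" not in author:
            return False                  # string author: silently downgraded
        person_page = author["@id"].split("#")[0]
        if not any(b.get("@type") == "Person" and b.get("@id") == author["@id"]
                   for b in jsonld_blocks(person_page)):
            return False                  # 404, or page doesn't self-identify
    return True
```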
## Brand mentions outside your domain
Across our sample, the strongest predictor of getting cited was not on-page at all. It was the count of unlinked brand mentions on third-party domains the composer trusts: industry trade press, Wikipedia, GitHub READMEs, well-known Substacks, and the long tail of "best-of" round-ups. The mechanism appears to be that the composer cross-checks whether a candidate source is a known entity in the topic graph; presence in other graphs is what makes you "known." Two clients with identical on-page work and similar backlink profiles diverged sharply on AI Overview presence, and the only material difference was that the cited one had been mentioned by name in 8 trade publications in the prior six months, versus one for the other. That gap is what we now spend digital PR budget on, not generic link-building.
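"Unlinked mention" is easy to misconstrue, so here is the operational definition we use when auditing a single third-party page, as a sketch. Sourcing the pages to check still comes from your media-monitoring tool:

```python
import re

def classify_mention(html: str, brand: str) -> str:
    """'unlinked' if the brand name appears outside any <a> tag; else linked/absent."""
    # Blank out anchor contents; whatever mention survives is unlinked.
    no_anchors = re.sub(r"<a\b[^>]*>.*?</a>", " ", html, flags=re.S | re.I)
    if re.search(re.escape(brand), no_anchors, re.I):
        return "unlinked"
    if re.search(re.escape(brand), html, re.I):
        return "linked"
    return "absent"

# e.g. classify_mention(trade_article_html, "Example Co")
```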
## The field-tested checklist

1. Add a 40–80 word lift-out paragraph at the top of each H2 section. Treat it as a distinct writing task from the long-form below it; both reviewers should sign off separately.
2. Audit Organization and Person schema for `sameAs` coverage. Wikidata + LinkedIn + the obvious industry profile is the minimum bar in 2026; missing any of the three is a measurable hit on citation likelihood.
3. Track AI Overview citation share for your 20 most commercial queries weekly. Note the gap between "ranks but not cited" and "cited but not ranked" — those two error modes need different fixes (see the tracking sketch after this list).
4. Move 20–30% of link budget into unlinked brand-mention work in trade publications. The composer reads cross-domain entity recognition more than it reads dofollow links.
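The tracking in step 3 is a spreadsheet-sized job, but the split matters enough that we script it. A sketch, assuming you record two booleans per query per week; how you capture them (a SERP API or by hand) is up to you, and the field names are ours:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    query: str
    ranked_top10: bool    # URL present in organic positions 1-10
    cited_in_ao: bool     # URL present in the AI Overview citation strip

def citation_share(week: list[Observation]) -> float:
    """Share of tracked queries where we appear in the citation strip."""
    return sum(o.cited_in_ao for o in week) / len(week) if week else 0.0

def error_modes(week: list[Observation]) -> dict[str, list[str]]:
    """The two gaps need different fixes: passage geometry vs classic ranking."""
    return {
        "ranked_not_cited": [o.query for o in week if o.ranked_top10 and not o.cited_in_ao],
        "cited_not_ranked": [o.query for o in week if o.cited_in_ao and not o.ranked_top10],
    }
```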
## Where this argument breaks
There are query classes where AI Overviews are still not the dominant treatment: shopping comparison ("X vs Y"), real-time queries ("score of last night's game"), and queries where the SERP already shows a tool or interactive widget. For those, classic ranking is still the whole game and the lift-out paragraph work is overhead. There is also a regulatory line — for medical, legal and financial queries the composer is more conservative, preferring institutional sources, and on-page work alone will not move you into the citation set without underlying trust signals. Outside those carve-outs, the composer is the new top-of-page treatment and the work shape has to change to match.