**TL;DR** — In April 2026 we tracked 60 commercial queries across ChatGPT search, Perplexity and Google Gemini, asking each engine the same question on the same day. The cited URL sets overlapped by only 31% on average — and the quotation behaviour, source preferences and recency weighting differ enough that one general GEO playbook is no longer enough. Each engine needs its own optimisation move.
How we ran the audit
Sixty queries — split evenly across SaaS, DTC, B2B and consumer — were sent through ChatGPT search, Perplexity Pro and Google Gemini on the same weekday inside a quiet 30-minute window to minimise live-traffic noise. We logged every cited URL, the order in which it appeared inside the answer, and whether the engine quoted the source verbatim or paraphrased it. Each query was repeated three times across two weeks; only URLs that showed up at least twice were kept, and anything appearing once was treated as noise.
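To make the "kept at least twice" rule concrete, here is a minimal sketch of the noise filter in Python. The function name, threshold and URLs are illustrative, not our actual tooling; only the logic matches the method above.

```python
from collections import Counter

def stable_citations(runs: list[set[str]], min_appearances: int = 2) -> set[str]:
    """Keep only URLs cited in at least `min_appearances` of the repeated runs."""
    counts = Counter(url for run in runs for url in run)
    return {url for url, n in counts.items() if n >= min_appearances}

# Three repetitions of one query on one engine (invented URLs).
runs = [
    {"https://a.example/post", "https://b.example/guide"},
    {"https://a.example/post", "https://c.example/review"},
    {"https://a.example/post", "https://b.example/guide"},
]
print(stable_citations(runs))  # {'https://a.example/post', 'https://b.example/guide'}
```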
One caveat up front: each engine personalises differently and we did not use logged-in profiles. The numbers below are for fresh sessions on the same IP block. With logged-in personalisation the divergence widens; with same-day re-runs from a clean profile, the noise floor sits around 15%. That is the threshold below which we would not call a movement real.
What the citation overlap actually looks like
Pairwise overlap was: ChatGPT × Perplexity 38%, ChatGPT × Gemini 28%, Perplexity × Gemini 27%. The three-way overlap — URLs cited by all three engines on the same query — was 14%. That 14% is the surface a "general GEO" investment buys you. The remaining 86% is per-engine: a URL cited by Perplexity that nothing else surfaces, or a URL Gemini pulls into its answer that ChatGPT silently ignores. Two clients optimising for the same head term will get materially different traffic profiles depending on which engine their audience prefers.
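A minimal sketch of the overlap calculation, assuming "overlap" means the Jaccard index of two cited-URL sets; the exact denominator is a modelling choice, and the sets below are invented rather than drawn from the audit data.

```python
def overlap(a: set[str], b: set[str]) -> float:
    """Jaccard overlap of two cited-URL sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

chatgpt    = {"u1", "u2", "u3", "u4"}
perplexity = {"u2", "u3", "u5"}
gemini     = {"u3", "u6"}

print(f"ChatGPT x Perplexity: {overlap(chatgpt, perplexity):.0%}")
three_way = len(chatgpt & perplexity & gemini) / len(chatgpt | perplexity | gemini)
print(f"three-way: {three_way:.0%}")
```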
The shape of the per-engine preference is also different from what the marketing decks suggest. Perplexity leans heaviest into freshness — 52% of its cited URLs were less than 90 days old, vs. 31% for ChatGPT and 28% for Gemini. ChatGPT prefers domain authority; 64% of its cited domains were inside the top 1,000 by Tranco rank. Gemini biases toward Google's own surfaces — Wikipedia, Google Scholar, and high-trust news properties — at a rate measurably higher than the other two. None of these surprised us individually; the gap between them is what most teams underestimate.
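If you want to profile your own citation set the same way, two helper sketches follow, assuming you already have a publication date per URL and a Tranco-style domain list loaded elsewhere; the names and the sample data are invented.

```python
from datetime import date, timedelta
from urllib.parse import urlparse

def freshness_share(published: dict[str, date], as_of: date, window_days: int = 90) -> float:
    """Share of cited URLs published inside the recency window."""
    recent = sum(1 for d in published.values() if (as_of - d) <= timedelta(days=window_days))
    return recent / len(published)

def authority_share(urls: set[str], top_domains: set[str]) -> float:
    """Share of cited URLs whose host appears in a rank list, e.g. the top
    1,000 of a Tranco download; host matching here is cruder than a real
    audit would be."""
    hosts = (urlparse(u).netloc.removeprefix("www.") for u in urls)
    return sum(1 for h in hosts if h in top_domains) / len(urls)

pages = {"https://a.example/p": date(2026, 2, 1), "https://b.example/q": date(2024, 1, 5)}
print(freshness_share(pages, as_of=date(2026, 4, 15)))  # 0.5
```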
Quotation versus paraphrase
When you actually read the answers, the engines lift text differently. Perplexity quotes most aggressively: 41% of its sentences in our sample were near-verbatim from a source page. ChatGPT paraphrases the most: 23% near-verbatim. Gemini sits in between at 31%. This matters because the lift-out-paragraph work that pays off for AI Overviews — short, declarative, attribution-friendly chunks — pays off best on Perplexity, then Gemini, then ChatGPT. On ChatGPT, what gets you cited is more often the page's overall topical density and entity coverage, not a single quotable sentence.
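A crude way to reproduce the near-verbatim split is string-similarity matching between each answer sentence and the sentences of the cited page. This is a sketch, not our pipeline, and the 0.9 threshold is an assumption rather than the cut-off we used.

```python
import difflib
import re

def near_verbatim(answer_sentence: str, source_text: str, threshold: float = 0.9) -> bool:
    """Flag an answer sentence as near-verbatim if it closely matches any
    single sentence on the source page."""
    source_sentences = re.split(r"(?<=[.!?])\s+", source_text)
    return any(
        difflib.SequenceMatcher(None, answer_sentence.lower(), s.lower()).ratio() >= threshold
        for s in source_sentences
    )

page = "Our churn fell 12% after the migration. Pricing starts at $49."
print(near_verbatim("Our churn fell 12% after the migration.", page))        # True
print(near_verbatim("The company reduced churn following a replatform.", page))  # False
```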
There is a corollary worth being explicit about: if your PR strategy is built on "we want people to be able to copy a clean quote into Slack," Perplexity is the engine that rewards it most. If your strategy is "we want to be one of three sources cited inside a longer synthesis," ChatGPT is more likely to pick you up. A GEO program that does not distinguish the two ends up shaping content for the wrong engine and underperforming on both. We now write the brief differently depending on which engine the client's audience uses; the same writer, the same topic, two different drafts.
How recency weighting interacts with content cadence
Perplexity's freshness bias has a knock-on effect: refreshing a page does not necessarily refresh its citation eligibility on Perplexity. We tested this by updating four pages with new examples and a current `dateModified`; citations on Perplexity recovered for two of the pages and never recovered for the other two. The answer text suggests the engine still reads some content as "old style" even after the dateline updates, possibly because it caches the embedding and only re-embeds on a stronger signal (a URL change, a large content delta, a structural rewrite). On Gemini and ChatGPT, the same updates moved citations within a week.
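For reference, the `dateModified` we bumped lives in the page's schema.org markup. A minimal sketch of its shape, with every value invented, emitted from Python to keep the examples in one language:

```python
import json

# Illustrative Article markup; the URL, headline and dates are placeholders,
# not the audited pages.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "mainEntityOfPage": "https://example.com/guide",
    "headline": "Example guide",
    "datePublished": "2024-06-03",
    "dateModified": "2026-04-10",  # bumped on refresh; honoured for only two of our four pages
}
print(json.dumps(article, indent=2))
```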
Practically: if Perplexity matters to your category, treat publication as the primary lever, not refresh. A fresh URL with a clear publication date and a coherent topical structure outperforms a refreshed older URL even when the underlying content is identical. This is the opposite of the conventional Google-era refresh playbook, and it is a real cost line in 2026 GEO programs — fresh URLs cost more to build, to link internally, and to earn external mentions for. We have stopped recommending the "republish under the old URL" trick on Perplexity-sensitive pages, even though it still works on Google.
What changes in our weekly process
We added an "engine-cited" matrix to every client review: for each tracked query, which of the three engines cite us, and where each cited URL sits in the answer order. The pattern we look for is "cited on one but not the others" — that is exactly where a per-engine fix exists. URLs cited by all three are stable; URLs cited by none need a different conversation about whether the page deserves to be cited at all. We also track whether the engine cites us in the first source position versus the third, because answer order correlates with how much of the engine's text actually paraphrases that source.
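A minimal sketch of the matrix itself; the queries and citation flags below are invented, and the real version also records answer order per engine.

```python
import pandas as pd

# Illustrative weekly snapshot: 1 = one of our URLs was cited for that query.
rows = [
    {"query": "best crm for smb",     "chatgpt": 1, "perplexity": 0, "gemini": 1},
    {"query": "saas churn benchmark", "chatgpt": 0, "perplexity": 1, "gemini": 0},
    {"query": "dtc email playbook",   "chatgpt": 1, "perplexity": 1, "gemini": 1},
]
matrix = pd.DataFrame(rows).set_index("query")

cited = matrix.sum(axis=1)
partial = matrix[(cited > 0) & (cited < 3)]  # cited somewhere, but not everywhere
print(partial)  # the rows where a per-engine fix exists
```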
1. Run the same 30 commercial queries through ChatGPT, Perplexity and Gemini weekly. Without the cross-engine matrix you cannot tell whether a citation drop is engine-specific or category-wide — and the fixes are different.
2. Treat lift-out paragraphs as a Perplexity-first investment and topical density as a ChatGPT-first investment. The same page can do both, but the briefs are different and the reviewers should be different.
3. Publish, do not refresh, when Perplexity matters. A new URL with a fresh date and coherent topical structure beats an updated old URL on Perplexity in our sample, even with identical content.
4. Audit your `sameAs` and Wikipedia presence quarterly. Gemini's Google-surface bias means a missing or weak Wikipedia entry costs you on Gemini even when ChatGPT and Perplexity cite you; a minimal `sameAs` sketch follows this list.
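On that fourth point, here is a minimal sketch of Organization markup with `sameAs` links, again emitted from Python; every name, URL and identifier is a placeholder.

```python
import json

# Illustrative Organization markup; swap in your own entity's pages.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Co",
        "https://www.wikidata.org/wiki/Q0000000",
        "https://www.linkedin.com/company/example-co",
    ],
}
print(json.dumps(org, indent=2))
```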
Where this argument breaks
This audit covered English-language Western queries only. Chinese-language AI search (Baidu ERNIE, Tencent Yuanbao, Kimi and others) shows different overlap patterns and different freshness mechanics — we will publish that audit separately. For regulated verticals — medical, legal, financial — all three engines collapse toward the same handful of institutional sources, and the divergence we describe largely disappears. The cross-engine work pays best in the broad commercial middle: SaaS, DTC, B2B services, consumer technology. Outside that, treat the numbers above as directional rather than load-bearing for budget decisions.