**TL;DR** — 60 days after AI Overviews became the default treatment on most commercial queries, our client cohort lost an average of 17% of organic Google clicks but gained a measurable, growing trickle of referral sessions from ChatGPT, Perplexity and Gemini. GA4's defaults route most of that traffic to "Direct" or "Unassigned." The setup below — custom channel grouping, server-side referrer normalisation, and a small landing-page tagger — recovered 71% of the missing visibility for our weekly review.
Why GA4's defaults hide AI-search traffic
When ChatGPT cites you with a clickable source on the desktop web, the user lands with a clean `referer: chatgpt.com`. That is the easy case. Three things break it. ChatGPT's mobile apps on iOS and Android send the click without a referrer at all on roughly 60% of sessions — those visits land in the Direct bucket. Perplexity rewrites referrers through `perplexity.ai/search/...` paths that vary enough to fragment the report by sub-path. Gemini opens many citations through Google redirect URLs that strip down to "google" — indistinguishable from organic search in your default GA4 view. Without intervention, roughly a third of your AI-search traffic sits in Direct and another slice is silently misattributed to Google organic.
The practical effect on a 90-day report: ChatGPT, Perplexity and Gemini together looked like 0.4% of sessions on the default GA4 view. Once we re-attributed correctly with the setup below, it was 3.1% — and trending up week over week. The number is small now, but the slope matters more than the level. The same shape is what made "social" worth measuring in 2014; teams that waited for the absolute number to get big were a year late on building the dashboards.
The custom channel grouping we now ship
In Admin → Data Display → Channel Groups, we create a custom channel group with an "AI Search" channel built from three rules: source contains `chatgpt`, source contains `perplexity`, or source contains `gemini`. Above it we add three engine-specific channels for ChatGPT-attributed, Perplexity-attributed and Gemini-attributed traffic, ordered first because GA4 evaluates channels top-down and assigns the first match; that way the weekly review can show the engines disaggregated, with the combined channel left as a safety net. The standard "Organic Search" channel is left untouched; we never want to retroactively rewrite Google clicks into AI clicks just to make a chart look bigger. The point of the disaggregation is that the conversion rate, the landing pages and the queries are all different per engine — combining them hides the work.
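For reference, the precedence those channels encode looks like the sketch below. It illustrates the rule order only, not GA4's implementation; the patterns mirror the "source contains" conditions above.

```ts
// Illustration of channel precedence: GA4 assigns the first channel whose
// rule matches, top-down, so the engine-specific rules sit above the
// combined catch-all.
const AI_CHANNEL_RULES: Array<[string, RegExp]> = [
  ["ChatGPT-attributed", /chatgpt/i],
  ["Perplexity-attributed", /perplexity/i],
  ["Gemini-attributed", /gemini/i],
  ["AI Search", /chatgpt|perplexity|gemini/i], // safety net; rarely reached
];

function classifySource(source: string): string | null {
  for (const [channel, pattern] of AI_CHANNEL_RULES) {
    if (pattern.test(source)) return channel;
  }
  return null; // falls through to the standard channel definitions
}
```

With the engine rules on top, `classifySource("chatgpt.com")` lands in ChatGPT-attributed, never in the combined bucket; the catch-all exists so a new engine can be added without re-ordering everything.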
The grouping fixes the reporting layer but not the data layer. To fix the source itself, we add a small redirect-tagger on landing pages: a 1 KB script behind a Cloudflare Worker that reads the incoming `Referer` header server-side, normalises common AI hosts (chatgpt.com, openai.com, perplexity.ai, gemini.google.com, bard.google.com, copilot.microsoft.com) to canonical names, and injects the result into the page as a `gtag("event", "ai_referrer", { source })` call that runs before GA4 fires its first hit. For iOS no-referrer sessions we cross-check the User-Agent on the Worker to flag the ChatGPT iOS app — the UA string is stable enough to be a reliable secondary signal. Server-side tagging catches what client-side `document.referrer` cannot.
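A minimal sketch of that Worker in module syntax; the host map, the UA token and the injection point are our assumptions rather than a spec, so verify each against your own logs.

```ts
// Sketch of the edge referrer-tagger, assuming a Cloudflare Worker routed in
// front of the origin.
const AI_HOSTS: Record<string, string> = {
  "chatgpt.com": "chatgpt",
  "openai.com": "chatgpt",
  "perplexity.ai": "perplexity",
  "gemini.google.com": "gemini",
  "bard.google.com": "gemini",
  "copilot.microsoft.com": "copilot",
};

function normaliseSource(referer: string | null, ua: string | null): string | null {
  if (referer) {
    try {
      const host = new URL(referer).hostname.replace(/^www\./, "");
      for (const [aiHost, name] of Object.entries(AI_HOSTS)) {
        if (host === aiHost || host.endsWith(`.${aiHost}`)) return name;
      }
    } catch {
      // Malformed Referer header: fall through to the UA check.
    }
  }
  // No usable referrer: the app UA is the secondary signal. The exact token
  // here is a placeholder; confirm it against real ChatGPT-iOS traffic.
  if (ua && /ChatGPT/i.test(ua)) return "chatgpt";
  return null;
}

export default {
  async fetch(request: Request): Promise<Response> {
    const source = normaliseSource(
      request.headers.get("Referer"),
      request.headers.get("User-Agent"),
    );
    const response = await fetch(request); // pass through to origin
    const isHtml = (response.headers.get("Content-Type") ?? "").includes("text/html");
    if (!source || !isHtml) return response;
    // Queue the event on the dataLayer ahead of the GA4 snippet so it is
    // the first thing gtag.js processes when it loads.
    const tag =
      `<script>window.dataLayer=window.dataLayer||[];` +
      `function gtag(){dataLayer.push(arguments);}` +
      `gtag("event","ai_referrer",{source:${JSON.stringify(source)}});</script>`;
    return new HTMLRewriter()
      .on("head", { element(head) { head.prepend(tag, { html: true }); } })
      .transform(response);
  },
};
```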
UTMs: when AI engines preserve them, when they do not
A common workaround is "make the AI cite UTM-tagged URLs." It does not survive contact with reality. Across our cohort, Perplexity preserves UTM parameters in cited URLs about 90% of the time. ChatGPT preserves them about 40% of the time, with desktop higher than mobile. Gemini rewrites cited URLs through Google redirects often enough that UTM parameters survive only 25% of the time. The takeaway: do not rely on UTMs as your primary source of truth. Use them as a corroborating signal where they survive, and use the channel grouping plus the referrer-tagger as the load-bearing measurement. Teams that built their AI-traffic dashboard on UTMs alone are reporting noise — sometimes 70% of the actual traffic is missing from the chart.
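The precedence rule, as code; a sketch, with the field names (`serverFlag`, `utmSource`) as assumptions.

```ts
// The server-side flag is load-bearing; a surviving UTM only corroborates.
function resolveAiSource(
  serverFlag: string | null, // from the edge referrer-tagger event
  utmSource: string | null,  // from the landing URL, where it survived
): { source: string | null; corroborated: boolean } {
  if (serverFlag) {
    return { source: serverFlag, corroborated: utmSource === serverFlag };
  }
  // A lone UTM with no server-side signal gets logged for inspection but
  // never counted on its own: on ChatGPT and Gemini it is absent too often
  // for a UTM-only count to mean anything.
  return { source: null, corroborated: false };
}
```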
A subtle gotcha: Perplexity's preserved UTMs come through with the query parameters in a different order than your canonical version, which fragments the report by URL even when the destination page is identical. We collapse this in our Looker Studio data layer with a regex that strips the UTM block before grouping, then re-join on a separate `ai_traffic` flag derived from our server-side event. None of this is hard; you just need someone willing to write the SQL once and then leave it alone. The mistake we keep seeing is writing the same logic three different ways in three different dashboards and then arguing about which number is right.
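The stripping step itself is small. Here is the shape of it in TypeScript rather than our Looker Studio SQL; the regex illustrates the approach, not our exact pattern.

```ts
// Strip the utm_* block so Perplexity's reordered parameters collapse onto
// the canonical URL before grouping; the ai_traffic flag re-joins downstream.
function stripUtmBlock(url: string): string {
  return url
    .replace(/([?&])utm_[a-z]+=[^&#]*/gi, "$1") // drop each utm_* pair
    .replace(/\?&+/, "?")                        // repair "?&" left mid-string
    .replace(/[?&]+(#|$)/, "$1");                // tidy a dangling "?" or "&"
}
```

In a warehouse-backed view the same step is one `REGEXP_REPLACE`, written once in the view definition and shared by every dashboard.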
What the numbers actually look like
Across 14 client sites measured for 60 days, AI-search referral sessions grew from 0.6% to 3.1% of total organic-equivalent traffic. Conversion rates from AI-search referrals are higher than from Google organic — 4.2% vs 2.7% across the cohort — likely because the user has already read a synthesised answer and is clicking through with intent rather than curiosity. The traffic is small; the qualified-lead share inside it is disproportionate. We now report AI-search sessions, AI-search-attributed conversions, and AI-search-attributed pipeline as three separate columns alongside the classic organic columns. The boards that used to read "organic clicks down 17%" now read "organic clicks down 17%, AI-search conversions up 380% off a tiny base" — a much more honest summary of the real direction.
Per-engine, the picture diverges. ChatGPT sends the largest share (54% of AI-search sessions in our cohort) but the lowest conversion rate (3.6%). Perplexity sends less volume (28%) but the highest conversion rate (5.9%) — the user has already seen quoted text and is clicking to verify, which selects for higher intent. Gemini is the smallest contributor (18%) and sits in between. If you only look at the combined number you cannot see that Perplexity is the cheapest customer-acquisition channel inside AI search; the per-engine split is what makes the budget conversation possible.
What changes in the weekly report
One column per AI engine, not one combined "AI" total. The error modes are different and lumping them together hides the work. We also added a "queries we are cited on" column pulled from a separate citation tracker (we run our own; Profound and Goodie also work) and join the citation count to the GA4 referral count to spot pages cited but not getting clicks — usually a thumbnail, title or snippet problem on the source page, not a citation problem. Cited but not clicked is a fixable category; not cited at all is a content problem. They should not share a row in the report, and our previous template did not separate them — that is a recent change for us.
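The join that separates the two categories is small; a sketch, with the data shapes and the minimum-citation threshold as assumptions.

```ts
// Join citation counts per page (from the citation tracker) against GA4
// AI-referral sessions per page, and flag pages that earn citations but
// no clicks. The threshold is an assumed noise floor.
function citedButNotClicked(
  citationsByPage: Map<string, number>,
  aiSessionsByPage: Map<string, number>,
  minCitations = 5,
): string[] {
  const flagged: string[] = [];
  for (const [page, citations] of citationsByPage) {
    if (citations >= minCitations && (aiSessionsByPage.get(page) ?? 0) === 0) {
      flagged.push(page); // usually a title, thumbnail or snippet problem
    }
  }
  return flagged;
}
```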
1. Build a custom GA4 Channel Group with separate ChatGPT / Perplexity / Gemini channels. The default channel grouping under-counts AI traffic by ~70% in our cohort and routes most of it to Direct.
2. Add a server-side referrer-tagger via Cloudflare Worker (or your equivalent edge layer) to catch iOS-app sessions with no client referrer. The UA string is your fallback signal.
3. Treat UTMs as corroborating, not load-bearing. Perplexity preserves them; ChatGPT and Gemini drop them often enough to break a UTM-only setup.
4. Report AI-search clicks, conversions and pipeline alongside Google organic — never inside it. The conversion-rate gap (4.2% vs 2.7% in our sample) is too large to bury in a single bucket.
Where this argument breaks
The server-side referrer-tagger is overkill for sites under 100k sessions/month; the absolute AI-search traffic is too small to justify the engineering. For those sites, the channel grouping alone is enough. The numbers above assume English-language Western markets; Chinese-language AI search (元宝, 文心一言, Kimi) sends traffic with different referrer patterns and the UTM survival rates are different — we have a separate setup for the China cohort that we will publish once we have 60 days of clean data. And for sites where AI traffic is genuinely below the noise floor — heavily regulated B2B verticals and a few enterprise SaaS niches — the work is not yet ROI-positive. Build the channel grouping anyway; you will need it within twelve months and the cost of doing it now is one afternoon.