By A. Rivera · Published Apr 12, 2026 · 10 min · #technical

INP after one year: what we have learned tuning Core Web Vitals in 2026


**TL;DR** — INP replaced FID as a Core Web Vital in March 2024, so by mid-2026 we have a full year of field data on what actually moves it on real sites. Most of the lab-tooling advice is still right; the sequencing of fixes was wrong. The single highest-leverage action on every site we have tuned is reducing third-party JavaScript execution during the first interaction, not the headline frameworks debate.

What CrUX actually shows after a year

Looking at the 25-week CrUX history for the 22 client sites we measure, the median site's p75 INP improved by 38% over the past year — but the improvement is wildly uneven. Sites that did the work cleared the 200ms threshold; sites that did not are still drifting in the 250–400ms band, often with a worse trend than they had at the start of 2025. The cohort split is mostly explained by one variable: whether the site shipped a real long-tasks budget or kept relying on lab Lighthouse scores. Lighthouse INP simulation is *correlated* with field INP but it is not the same number. We have shipped INP fixes that moved Lighthouse from 95 to 98 and moved field p75 by zero milliseconds, and shipped fixes that did the opposite.

The first thing we now do on a new engagement is wire up the `web-vitals` library with attribution mode so we can see *which element* the user interacted with when the slow INP fired. Without that signal, INP is just a number; with it, you can usually narrow the problem to two or three components on the page. Half of the "bad INP" reports we get from clients are actually one slow modal close handler or one third-party tag responding to a click. Once you know which component fires the long task, the fix is small. Without attribution, you spend a week refactoring the framework and INP does not budge.
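In practice the wiring is small. A minimal sketch, assuming web-vitals v4+ (whose attribution build exposes `attribution.interactionTarget` as a CSS selector for the element behind the slow interaction); the `/rum-inp` endpoint is a placeholder for your own collector:

```javascript
// Pure helper: reduce a web-vitals INP metric (attribution build) to a compact log record.
function describeINP(metric) {
  const a = metric.attribution || {};
  return {
    value: Math.round(metric.value),              // INP in ms
    element: a.interactionTarget || '(unknown)',  // CSS selector of the slow element
    type: a.interactionType || '(unknown)',       // 'pointer' or 'keyboard'
    rating: metric.rating,                        // 'good' | 'needs-improvement' | 'poor'
  };
}

// Browser-only wiring; skipped entirely outside the browser.
if (typeof window !== 'undefined') {
  import('web-vitals/attribution').then(({ onINP }) => {
    onINP((metric) => {
      // sendBeacon survives page unload; /rum-inp is a hypothetical endpoint.
      navigator.sendBeacon('/rum-inp', JSON.stringify(describeINP(metric)));
    });
  });
}
```

Group the logged records by `element` and the two or three offending components usually fall straight out of the data.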

The third-party tag problem

On 17 of the 22 sites, the single largest contributor to p75 INP was a third-party tag — analytics, A/B testing, customer-data platforms, support widgets, ad pixels — running synchronously inside the user's first interaction. The pattern is almost always the same: the tag listens for clicks via a delegated event handler at the document root, runs a few hundred milliseconds of work to build a payload, and posts to its own endpoint. Because the work runs on the main thread before the page can react, the user sees their click "go nowhere" for the duration of the tag's work. INP is exactly the metric that surfaces this.

The fix is rarely "remove the tag." Marketing wants the data, the support widget is contractually required, and so on. The fix is to defer the tag's work past the first interaction using `requestIdleCallback`, or to move it into a Web Worker via Partytown for tags that are eligible. We measure the deferral by capturing a `performance.mark` at the start of the click handler and another at the end, and watching the field-data distribution for that mark over the next two weeks. On the sites where this work was done seriously, p75 INP dropped 80–140ms with no functional change visible to users — a faster click outcome with the same downstream tracking. On sites where it was done halfway, the gain is not half; it is closer to 20%.
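The deferral pattern is a few lines. A sketch, not any vendor's actual code: `buildAndSendPayload` stands in for the tag's heavy work, and the 2000ms `requestIdleCallback` timeout is an assumed budget, not a magic number:

```javascript
// Stub for the tag's expensive work (payload build + beacon); hypothetical.
function buildAndSendPayload(target) {
  void target; // a real tag serializes the click context and POSTs it here
}

// Run `work` after the browser is idle; fall back to a macrotask where
// requestIdleCallback is unavailable (Safari, workers, Node).
function deferPastInteraction(work) {
  if (typeof requestIdleCallback === 'function') {
    requestIdleCallback(work, { timeout: 2000 });
  } else {
    setTimeout(work, 0);
  }
}

// Click handler: only the cheap synchronous part stays on the interaction path,
// bracketed by performance marks so the sync cost shows up in field data.
function instrumentedClickHandler(event) {
  performance.mark('tag-click-start');
  const target = event && event.target ? event.target : null; // cheap read
  deferPastInteraction(() => buildAndSendPayload(target));    // heavy work deferred
  performance.mark('tag-click-end');
  performance.measure('tag-click-sync', 'tag-click-start', 'tag-click-end');
}
```

The `tag-click-sync` measure is the number to ship to your RUM pipeline: if the deferral worked, its distribution collapses toward single-digit milliseconds while the downstream tracking volume stays flat.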

Hydration cost on framework-heavy pages

The second-largest contributor was hydration cost on Next.js, Nuxt and SvelteKit pages that ship a large component tree to the client even when the user is on a page that does not need most of it. The fix here is genuine architectural work: islands, partial hydration, and aggressive code-splitting at the route level. The trap is that these sound expensive and they are — but the gain only shows up if you do them where the user actually clicks. We have seen teams spend a quarter on islands across the marketing site and not move INP because the slow page is the product page, not the marketing page. Pull the CrUX p75 by URL group before you decide where to invest. The page where users are interacting most is rarely the page that gets the most attention from the dev team.

A specific anti-pattern we keep finding: a generic Layout component that imports a heavy analytics SDK and a heavy A/B SDK at the top, and then is used by every route including the high-traffic landing page. Every interaction on that page pays the hydration cost of two SDKs the landing page does not need. Splitting the layout into a "marketing layout" without the SDKs and a "logged-in layout" with them, and routing pages explicitly, was the single change that moved a SaaS client from 320ms to 180ms p75 INP in two weeks — without touching a single component's logic.
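A framework-agnostic sketch of the same idea: instead of a static SDK import at the top of the shared layout, put each SDK behind a memoized lazy getter that only logged-in routes call. The `@vendor/*` package names are placeholders:

```javascript
// Memoize a dynamic import so the SDK is fetched and evaluated at most once,
// and only on routes that actually call the getter.
function lazySdk(loader) {
  let cached = null;
  return function get() {
    if (cached === null) cached = loader();
    return cached;
  };
}

// Hypothetical vendor packages; only the logged-in layout calls these getters,
// so marketing routes never hydrate (or even download) either SDK.
const getAnalytics = lazySdk(() => import('@vendor/analytics-sdk'));
const getExperiments = lazySdk(() => import('@vendor/ab-sdk'));
```

Bundlers that understand dynamic `import()` (webpack, Vite, the framework defaults in Next.js and SvelteKit) will split each SDK into its own chunk automatically, which is what makes this a 30-minute change rather than a refactor.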

Measurement discipline

Most "INP is bad on our site" conversations are over before they start because nobody agrees on which number to read. Lab Lighthouse, field CrUX 28-day p75, real-user monitoring p75, and your own debug build all give different answers. The number that Google ranks on is field CrUX p75 over the trailing 28 days at origin level, with mobile and desktop measured separately. Set that as the scoreboard, post it weekly, and ignore the others except as diagnostic tools. Teams that do this ship faster than teams that argue about which tool is right.
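If you log raw INP samples from your own RUM, the diagnostic statistic is just a percentile. A minimal nearest-rank p75; note that CrUX computes its p75 from binned histogram data over a trailing 28-day window, so a RUM p75 like this will track the scoreboard number without matching it exactly:

```javascript
// Nearest-rank 75th percentile over raw INP samples (values in ms).
function p75(samples) {
  if (samples.length === 0) return null;
  const sorted = [...samples].sort((a, b) => a - b); // numeric ascending
  const idx = Math.ceil(0.75 * sorted.length) - 1;   // nearest-rank index
  return sorted[idx];
}
```

For example, `p75([120, 90, 260, 310])` returns 260 under this definition.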

  1. Wire `web-vitals` attribution mode in production. Capture the slowest INP element per page and log it; without this, every fix is a guess.
  2. Defer or Partytown-isolate every third-party tag that runs synchronously on click. Measure the field-data deltas over two weeks, not the lab-data deltas in dev.
  3. Audit your generic Layout components for SDK imports. Routes that do not use the SDK should not hydrate the SDK; this is usually a 30-minute fix with a 100ms+ INP win.
  4. Pull CrUX p75 INP by URL group before architectural work. Spend the budget on the page where users are clicking most, not the prettiest page.
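Pulling the scoreboard number programmatically: the CrUX API's `records:queryRecord` endpoint takes an origin (or a specific `url`), a form factor, and a metric list. A sketch of the request body only; the actual `fetch` with an API key is left out:

```javascript
// Request body for POST https://chromeuxreport.googleapis.com/v1/records:queryRecord
// formFactor is 'PHONE' or 'DESKTOP' -- the two are scored separately.
function cruxRequestBody(origin, formFactor) {
  return {
    origin,                                  // e.g. 'https://example.com'
    formFactor,
    metrics: ['interaction_to_next_paint'],  // CrUX metric name for INP
  };
}
```

For page-level data, swap the `origin` field for a `url` field and the API returns the record for that normalized URL rather than the whole origin, which is how you build the by-URL-group view from step 4.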

Where this argument breaks

INP is irrelevant on documentation-style sites where 95% of sessions are pure scroll-and-read with no interaction; the metric simply does not fire. It is also a poor proxy on heavily interactive web apps (in-browser editors, design tools, dashboards) where the slow operations are intentionally heavy and users tolerate them. For the long middle of the web — marketing sites, ecommerce, SaaS pricing and onboarding pages, content sites with newsletter modals — INP is the metric that most directly tracks user-perceived speed in 2026, and the field-data discipline above is the work that moves it.

