Core Web Vitals Reference
Complete technical reference for Core Web Vitals (LCP, INP, CLS): thresholds, common causes, fixes, measurement tools, CrUX BigQuery queries, web-vitals library usage, and SEO impact.
Overview
Core Web Vitals (CWV) are a set of three user-centric performance metrics that Google uses as ranking signals in Search. They measure loading performance, interactivity, and visual stability. Every page is assessed at the 75th percentile of real-user field data.
The three metrics are:
| Metric | Full Name | Measures | Good | Needs Improvement | Poor |
|---|---|---|---|---|---|
| LCP | Largest Contentful Paint | Loading performance | ≤ 2.5 s | ≤ 4.0 s | > 4.0 s |
| INP | Interaction to Next Paint | Interactivity | ≤ 200 ms | ≤ 500 ms | > 500 ms |
| CLS | Cumulative Layout Shift | Visual stability | ≤ 0.1 | ≤ 0.25 | > 0.25 |
A page "passes" Core Web Vitals when all three metrics meet the "Good" threshold at the 75th percentile of real-user visits.
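The pass/fail logic can be sketched as a small helper. This is an illustrative sketch, not an official API; the threshold constants mirror the table above:

```javascript
// Thresholds from the table above: [good, needs-improvement] upper bounds.
const THRESHOLDS = {
  LCP: [2500, 4000], // milliseconds
  INP: [200, 500],   // milliseconds
  CLS: [0.1, 0.25],  // unitless score
};

// Rate a single metric value (assumed to be the 75th-percentile field value).
function rateMetric(name, value) {
  const [good, needsImprovement] = THRESHOLDS[name];
  if (value <= good) return 'good';
  if (value <= needsImprovement) return 'needs-improvement';
  return 'poor';
}

// A page passes CWV only when all three p75 values rate 'good'.
function passesCWV({ lcp, inp, cls }) {
  return rateMetric('LCP', lcp) === 'good'
    && rateMetric('INP', inp) === 'good'
    && rateMetric('CLS', cls) === 'good';
}
```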
LCP — Largest Contentful Paint
What It Measures
LCP reports the render time of the largest image, video, or text block visible within the viewport, relative to when the page first started loading. It captures the moment the user perceives the main content has loaded.
Thresholds
| Rating | Value |
|---|---|
| Good | ≤ 2.5 seconds |
| Needs Improvement | > 2.5 s and ≤ 4.0 s |
| Poor | > 4.0 seconds |
LCP Candidate Elements
The browser considers the following element types as LCP candidates:
- `<img>` elements (including `<img>` inside `<picture>`)
- `<image>` elements inside `<svg>`
- `<video>` elements (the poster image or first displayed frame)
- Elements with a `background-image` loaded via `url()` (not CSS gradients)
- Block-level elements containing text nodes or inline-level text children
The largest element may change as the page loads. The browser dispatches a new largest-contentful-paint entry each time a larger candidate appears. The final entry before user interaction is the LCP value.
Common Causes and Fixes
1. Slow server response time (TTFB)
- Use a CDN to serve content from edge locations close to users
- Cache HTML at the edge (stale-while-revalidate pattern)
- Pre-connect to required origins: `<link rel="preconnect" href="https://cdn.example.com">`
- Use 103 Early Hints to let the browser start resource fetching before the full response arrives
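To know whether TTFB is the bottleneck, you can compute it from the Navigation Timing API. A minimal sketch (the helper name is ours; `responseStart` and `startTime` are standard `PerformanceNavigationTiming` fields, and `startTime` is 0 for the navigation entry):

```javascript
// TTFB = time from navigation start to the first byte of the response.
function computeTTFB(navEntry) {
  return navEntry.responseStart - navEntry.startTime;
}

// In the browser:
// const [nav] = performance.getEntriesByType('navigation');
// console.log('TTFB:', computeTTFB(nav), 'ms');
```

As a rule of thumb, TTFB above roughly 800 ms leaves very little budget for a 2.5 s LCP.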
2. Render-blocking resources
- Inline critical CSS and defer non-critical stylesheets
- Defer or async non-critical JavaScript: `<script defer src="app.js"></script>`
- Minimize CSS size (remove unused rules, minify)
- Avoid `@import` in CSS (it serializes requests)
3. Slow resource load times
- Preload the LCP image: `<link rel="preload" as="image" href="hero.webp">`
- Use modern formats (WebP, AVIF) with `<picture>` fallbacks
- Set `fetchpriority="high"` on the LCP `<img>` element
- Use responsive images with `srcset` and `sizes` to avoid oversized downloads
- Compress images (80-85% quality is usually visually lossless)
4. Client-side rendering delay
- Server-side render (SSR) or statically generate (SSG) the LCP content
- Pre-render critical routes
- Reduce JavaScript bundle size blocking first paint
5. Web font blocking LCP text
- Use `font-display: swap` or `font-display: optional`
- Preload the primary font: `<link rel="preload" as="font" type="font/woff2" href="font.woff2" crossorigin>`
- Subset fonts to only the characters needed
INP — Interaction to Next Paint
What It Measures
INP measures the latency of all click, tap, and keyboard interactions throughout the entire page lifecycle and reports a value representing the worst interaction (with a heuristic that excludes one extreme outlier for every 50 interactions). It replaced First Input Delay (FID) as a Core Web Vital in March 2024.
Thresholds
| Rating | Value |
|---|---|
| Good | ≤ 200 milliseconds |
| Needs Improvement | > 200 ms and ≤ 500 ms |
| Poor | > 500 milliseconds |
How It Differs from FID
| | FID | INP |
|---|---|---|
| Scope | First interaction only | All interactions across the page lifecycle |
| What it measures | Input delay (time before handler runs) | Full latency: input delay + processing time + presentation delay |
| Status | Deprecated (March 2024) | Official Core Web Vital |
How INP Is Calculated
For each interaction, the browser measures three phases:
- Input delay — time from user action to event handler start (main thread may be busy)
- Processing time — time spent running event handlers
- Presentation delay — time from handler completion to next paint
The interaction latency is the sum of all three. INP selects the worst interaction, with a heuristic that ignores the very worst if there are 50+ interactions (to avoid penalizing pages for a single fluke).
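The selection rule above can be sketched as follows (a simplified model of the heuristic, not Chrome's exact implementation — the real one also groups events into interactions):

```javascript
// Select the INP value from a list of interaction latencies (ms):
// take the worst interaction, but skip one extreme outlier for
// every 50 interactions observed.
function selectINP(latencies) {
  if (latencies.length === 0) return undefined;
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  const skip = Math.min(Math.floor(latencies.length / 50), sorted.length - 1);
  return sorted[skip];
}
```

With fewer than 50 interactions this is simply the worst latency; with 50 or more, the single worst one is ignored.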
Common Causes and Fixes
1. Long tasks blocking the main thread
- Break up long tasks using `scheduler.yield()` (with fallback to `setTimeout(0)`)
- Use `requestIdleCallback` for non-urgent work
- Move heavy computation to Web Workers
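A common pattern for breaking up long tasks is to process work in chunks and yield between them. A sketch (the helper names are ours; `scheduler.yield()` is only available in newer Chromium browsers, hence the timeout fallback):

```javascript
// Yield to the main thread: prefer scheduler.yield() where available,
// otherwise fall back to a zero-delay timeout.
function yieldToMain() {
  if (typeof globalThis.scheduler?.yield === 'function') {
    return globalThis.scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large list in chunks, yielding between chunks so pending
// input events can be handled promptly.
async function processInChunks(items, handle, chunkSize = 100) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) handle(item);
    await yieldToMain();
  }
}
```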
2. Heavy JavaScript execution
- Code-split with dynamic `import()` so only the needed code runs
- Tree-shake unused exports at build time
- Defer non-critical scripts and third-party tags
- Audit and remove unused polyfills
3. Expensive event handlers
- Debounce or throttle rapid-fire handlers (scroll, resize, pointermove)
- Avoid forced synchronous layouts inside event handlers (read then write, never interleave)
- Use CSS `content-visibility: auto` to skip rendering off-screen content
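A minimal throttle sketch for rapid-fire handlers (the injectable clock parameter is our addition to make the behavior easy to verify; `updateOverlay` in the usage comment is a hypothetical handler):

```javascript
// Throttle: invoke fn at most once per waitMs.
// `now` defaults to Date.now; it is injectable for testing.
function throttle(fn, waitMs, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    const t = now();
    if (t - last >= waitMs) {
      last = t;
      fn(...args);
    }
  };
}

// Usage sketch (browser):
// window.addEventListener('pointermove', throttle(updateOverlay, 100));
```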
4. Large DOM size and expensive rendering
- Virtualize long lists (react-window, @tanstack/virtual)
- Reduce DOM depth and element count
- Use CSS `contain: layout style paint` to limit the scope of browser style/layout recalculations
5. Excessive re-renders in SPA frameworks
- Memoize components (`React.memo`, `useMemo`, `useCallback`)
- Use transitions (`startTransition`) for non-urgent state updates
- Avoid synchronous state updates in event handlers that trigger large re-renders
CLS — Cumulative Layout Shift
What It Measures
CLS measures the sum of unexpected layout shift scores that occur during the entire lifespan of a page. A layout shift occurs when a visible element changes its position from one frame to the next without being triggered by user interaction.
Thresholds
| Rating | Value |
|---|---|
| Good | ≤ 0.1 |
| Needs Improvement | > 0.1 and ≤ 0.25 |
| Poor | > 0.25 |
Session Window Calculation
CLS uses a "session window" approach:
- Layout shifts are grouped into session windows. A session window starts with the first shift and ends when there is a 1-second gap with no shifts, or when the window reaches 5 seconds total.
- The CLS score is the maximum session window score (not the sum of all windows).
- Each individual shift score = impact fraction × distance fraction.
- Impact fraction: the union of the unstable element's visible areas in the frames before and after the shift, as a fraction of the viewport area.
- Distance fraction: the greatest distance any unstable element moved, as a fraction of the viewport's largest dimension.
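The session-window rule can be sketched as a small function over `(time, value)` layout-shift records (an illustrative model of the rule above, not Chrome's implementation):

```javascript
// Compute CLS from layout-shift records: group shifts into session
// windows (a window ends after a 1 s gap or at 5 s total length) and
// return the largest window sum.
function computeCLS(shifts) {
  let cls = 0;
  let windowSum = 0;
  let windowStart = 0;
  let prevTime = -Infinity;
  for (const { time, value } of shifts) {
    const startsNewWindow =
      time - prevTime > 1000 || time - windowStart > 5000;
    if (startsNewWindow) {
      windowSum = 0;
      windowStart = time;
    }
    windowSum += value;
    prevTime = time;
    cls = Math.max(cls, windowSum);
  }
  return cls;
}
```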
What Counts as a Layout Shift
- Element position changes NOT triggered by user input (clicks, taps, key presses)
- CSS animations using `transform` do NOT count (use `transform: translateY()` instead of `top`)
- Layout shifts within 500 ms of a user interaction are excluded
Common Causes and Fixes
1. Images and videos without explicit dimensions
<!-- BAD: No dimensions, causes shift when image loads -->
<img src="hero.jpg" alt="Hero">
<!-- GOOD: Explicit dimensions or aspect-ratio -->
<img src="hero.jpg" alt="Hero" width="800" height="400">
<!-- GOOD: CSS aspect-ratio -->
<style>
.hero-img { aspect-ratio: 16 / 9; width: 100%; }
</style>
2. Dynamically injected content
- Reserve space for dynamic content with `min-height`
- Use CSS `contain-intrinsic-size` for lazily loaded content
- Place ad slots and embeds in fixed-size containers:
.ad-slot {
min-height: 250px;
min-width: 300px;
contain: layout style paint;
}
3. Web fonts causing FOIT/FOUT
- Use `font-display: optional` to eliminate layout shift entirely (uses the fallback if the font is not cached)
- Use `font-display: swap` with `size-adjust` and `ascent-override`/`descent-override` to match fallback metrics:
@font-face {
font-family: 'Custom Font';
src: url('custom.woff2') format('woff2');
font-display: optional;
}
/* Or match metrics for swap */
@font-face {
font-family: 'Adjusted Fallback';
src: local('Arial');
size-adjust: 105%;
ascent-override: 90%;
descent-override: 22%;
line-gap-override: 0%;
}
4. Ads, embeds, and iframes without reserved space
.embed-container {
position: relative;
padding-bottom: 56.25%; /* 16:9 aspect ratio */
height: 0;
overflow: hidden;
}
.embed-container iframe {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
5. Late-loading top-of-page content (banners, cookie bars)
- Place banners in the document flow above all content (so they push content down before paint)
- Or use `position: fixed`/`sticky` so they overlay without shifting existing content
Measurement Tools
Field Data (Real Users)
| Tool | What It Provides |
|---|---|
| CrUX (Chrome User Experience Report) | 28-day rolling field data for origins and URLs with sufficient traffic. Available via BigQuery, API, and PageSpeed Insights. |
| Google Search Console — Core Web Vitals report | Groups URLs by status (Good, Needs Improvement, Poor) for both mobile and desktop. Uses CrUX data. |
| PageSpeed Insights (field section) | CrUX data for the specific URL and origin, with 75th percentile values. |
| web-vitals JavaScript library | Measures CWV in real users' browsers and lets you send data to any analytics endpoint. |
Lab Data (Synthetic Testing)
| Tool | What It Provides |
|---|---|
| Lighthouse | Simulated LCP and CLS (INP requires real user interaction, not available in Lighthouse). |
| Chrome DevTools Performance panel | Record interactions and inspect LCP, CLS, and INP with full flame charts. |
| WebPageTest | Multi-step tests with filmstrip view, waterfall analysis, and CWV metrics. |
Important: Lab tools cannot fully measure INP (it requires real interactions). Always validate INP with field data.
JavaScript APIs
Use the PerformanceObserver API to observe CWV entries:
// Observe LCP
new PerformanceObserver((list) => {
const entries = list.getEntries();
const lastEntry = entries[entries.length - 1];
console.log('LCP:', lastEntry.startTime, lastEntry.element);
}).observe({ type: 'largest-contentful-paint', buffered: true });
// Observe CLS
new PerformanceObserver((list) => {
for (const entry of list.getEntries()) {
if (!entry.hadRecentInput) {
console.log('Layout shift:', entry.value, entry.sources);
}
}
}).observe({ type: 'layout-shift', buffered: true });
// Observe INP (via Event Timing API)
new PerformanceObserver((list) => {
for (const entry of list.getEntries()) {
if (entry.interactionId) {
console.log('Interaction:', entry.name, entry.duration, 'ms');
}
}
}).observe({ type: 'event', buffered: true, durationThreshold: 16 });
web-vitals Library
The web-vitals library is Google's official JavaScript library for measuring Core Web Vitals.
Installation and Setup
npm install web-vitals
Measuring Each Vital
import { onLCP, onINP, onCLS } from 'web-vitals';
onLCP(console.log);
onINP(console.log);
onCLS(console.log);
Each callback receives a metric object:
interface Metric {
name: 'LCP' | 'INP' | 'CLS'; // Metric name
value: number; // Metric value
rating: 'good' | 'needs-improvement' | 'poor';
delta: number; // Change since last report
id: string; // Unique ID for this metric instance
entries: PerformanceEntry[]; // Underlying performance entries
navigationType: string; // 'navigate' | 'reload' | 'back-forward' | etc.
}
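Note that CLS and INP can report more than once per page; each report carries a `delta` since the previous report, and the library documents that the deltas for one `id` sum to the final value. On the server you can therefore reconstruct final values like this (a sketch; the function name is ours):

```javascript
// Reconstruct the final value per metric id from a stream of reports
// by summing each report's delta.
function finalValues(reports) {
  const totals = new Map();
  for (const { id, delta } of reports) {
    totals.set(id, (totals.get(id) ?? 0) + delta);
  }
  return totals;
}
```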
Sending to GA4 as Custom Events
import { onLCP, onINP, onCLS } from 'web-vitals';
function sendToGA4(metric) {
gtag('event', metric.name, {
value: Math.round(metric.name === 'CLS' ? metric.value * 1000 : metric.value),
event_category: 'Web Vitals',
event_label: metric.id,
non_interaction: true,
});
}
onLCP(sendToGA4);
onINP(sendToGA4);
onCLS(sendToGA4);
Sending to a Custom Analytics Endpoint
import { onLCP, onINP, onCLS } from 'web-vitals';
function sendToAnalytics(metric) {
const body = JSON.stringify({
name: metric.name,
value: metric.value,
rating: metric.rating,
delta: metric.delta,
id: metric.id,
page: window.location.pathname,
navigationType: metric.navigationType,
});
// Use sendBeacon for reliability on page unload
if (navigator.sendBeacon) {
navigator.sendBeacon('/api/vitals', body);
} else {
fetch('/api/vitals', { body, method: 'POST', keepalive: true });
}
}
onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
Attribution Builds for Debugging
The attribution build provides detailed debugging information for each metric:
// Import from the attribution build
import { onLCP, onINP, onCLS } from 'web-vitals/attribution';
onLCP((metric) => {
console.log('LCP element:', metric.attribution.element);
console.log('LCP resource URL:', metric.attribution.url);
console.log('Time to first byte:', metric.attribution.timeToFirstByte);
console.log('Resource load delay:', metric.attribution.resourceLoadDelay);
console.log('Resource load time:', metric.attribution.resourceLoadDuration);
console.log('Element render delay:', metric.attribution.elementRenderDelay);
});
onINP((metric) => {
console.log('Slowest interaction target:', metric.attribution.interactionTarget);
console.log('Interaction type:', metric.attribution.interactionType);
console.log('Input delay:', metric.attribution.inputDelay);
console.log('Processing duration:', metric.attribution.processingDuration);
console.log('Presentation delay:', metric.attribution.presentationDelay);
console.log('Long animation frames:', metric.attribution.longAnimationFrameEntries);
});
onCLS((metric) => {
console.log('Largest shift target:', metric.attribution.largestShiftTarget);
console.log('Largest shift value:', metric.attribution.largestShiftValue);
console.log('Largest shift time:', metric.attribution.largestShiftTime);
console.log('Load state:', metric.attribution.loadState);
});
CrUX BigQuery Dataset
The Chrome User Experience Report (CrUX) publishes monthly and daily field data to BigQuery as a public dataset.
Dataset Structure
- Monthly tables: `chrome-ux-report.all.YYYYMM` (e.g., `202501`)
- Daily materialized table: `chrome-ux-report.materialized.device_summary`
- Country-level monthly: `chrome-ux-report.country_XX.YYYYMM` (e.g., `country_us.202501`)
Key columns: origin, form_factor (phone/desktop/tablet), effective_connection_type, and histogram bins for each metric.
Query: CWV Scores by Origin
SELECT
origin,
form_factor.name AS device,
-- LCP
ROUND(SAFE_DIVIDE(
SUM(IF(lcp.start < 2500, lcp.density, 0)),
SUM(lcp.density)
), 4) AS lcp_good_pct,
-- INP
ROUND(SAFE_DIVIDE(
SUM(IF(inp.start < 200, inp.density, 0)),
SUM(inp.density)
), 4) AS inp_good_pct,
-- CLS
ROUND(SAFE_DIVIDE(
SUM(IF(cls.start < 0.1, cls.density, 0)),
SUM(cls.density)
), 4) AS cls_good_pct
FROM
`chrome-ux-report.all.202501`,
UNNEST(largest_contentful_paint.histogram.bin) AS lcp,
UNNEST(interaction_to_next_paint.histogram.bin) AS inp,
UNNEST(cumulative_layout_shift.histogram.bin) AS cls
WHERE
origin = 'https://www.example.com'
GROUP BY origin, device
ORDER BY device
Query: CWV by Page Type (URL-level)
SELECT
url,
form_factor.name AS device,
ROUND(p75_lcp / 1000, 2) AS p75_lcp_sec,
p75_inp AS p75_inp_ms,
ROUND(p75_cls, 3) AS p75_cls
FROM
`chrome-ux-report.materialized.device_summary`
WHERE
origin = 'https://www.example.com'
AND url LIKE '%/product/%'
AND date = (SELECT MAX(date) FROM `chrome-ux-report.materialized.device_summary`)
ORDER BY p75_lcp DESC
LIMIT 50
Query: Trending CWV Over Time
SELECT
  _TABLE_SUFFIX AS yyyymm,
ROUND(SAFE_DIVIDE(
SUM(IF(lcp.start < 2500, lcp.density, 0)),
SUM(lcp.density)
), 4) AS lcp_good_pct,
ROUND(SAFE_DIVIDE(
SUM(IF(inp.start < 200, inp.density, 0)),
SUM(inp.density)
), 4) AS inp_good_pct,
ROUND(SAFE_DIVIDE(
SUM(IF(cls.start < 0.1, cls.density, 0)),
SUM(cls.density)
), 4) AS cls_good_pct
FROM
`chrome-ux-report.all.*`,
UNNEST(largest_contentful_paint.histogram.bin) AS lcp,
UNNEST(interaction_to_next_paint.histogram.bin) AS inp,
UNNEST(cumulative_layout_shift.histogram.bin) AS cls
WHERE
origin = 'https://www.example.com'
AND _TABLE_SUFFIX BETWEEN '202401' AND '202501'
AND form_factor.name = 'phone'
GROUP BY yyyymm
ORDER BY yyyymm
Query: Competitor Comparison
WITH origins AS (
SELECT origin
FROM UNNEST([
'https://www.yoursite.com',
'https://www.competitor-a.com',
'https://www.competitor-b.com'
]) AS origin
)
SELECT
o.origin,
form_factor.name AS device,
ROUND(SAFE_DIVIDE(
SUM(IF(lcp.start < 2500, lcp.density, 0)),
SUM(lcp.density)
), 4) AS lcp_good_pct,
ROUND(SAFE_DIVIDE(
SUM(IF(inp.start < 200, inp.density, 0)),
SUM(inp.density)
), 4) AS inp_good_pct,
ROUND(SAFE_DIVIDE(
SUM(IF(cls.start < 0.1, cls.density, 0)),
SUM(cls.density)
), 4) AS cls_good_pct
FROM
origins o
INNER JOIN `chrome-ux-report.all.202501` crux ON crux.origin = o.origin,
UNNEST(largest_contentful_paint.histogram.bin) AS lcp,
UNNEST(interaction_to_next_paint.histogram.bin) AS inp,
UNNEST(cumulative_layout_shift.histogram.bin) AS cls
WHERE
form_factor.name = 'phone'
GROUP BY o.origin, device
ORDER BY o.origin, device
Impact on SEO
Page Experience Ranking Signal
Core Web Vitals are part of Google's page experience ranking signals, alongside:
- HTTPS: The page is served over a secure connection
- Mobile-friendliness: The page is usable on mobile devices
- No intrusive interstitials: The page does not use disruptive popups
How CWV Affects Rankings
- CWV act as a tiebreaker signal. When content relevance and quality are similar between two pages, the one with better CWV may rank higher.
- CWV are not a dominant signal — they will not override strong content relevance or authoritative backlinks.
- Google assesses CWV using field data from CrUX at the 75th percentile. Lab scores do not directly affect rankings.
- Assessment is per URL, with similar URLs grouped together. Pages without URL-level CrUX data fall back to origin-level data, and pages without any field data are not penalized.
Mobile vs Desktop Assessment
- Google Search uses mobile CWV for mobile search rankings and desktop CWV for desktop search rankings.
- Mobile performance is typically worse due to lower processing power, slower networks, and smaller viewports.
- Optimize for mobile first — it usually has the strictest constraints.
Optimization Patterns
Image Optimization
<!-- Modern formats with fallback -->
<picture>
<source srcset="hero.avif" type="image/avif">
<source srcset="hero.webp" type="image/webp">
<img src="hero.jpg" alt="Hero" width="1200" height="600"
fetchpriority="high" decoding="async">
</picture>
<!-- Responsive images -->
<img src="hero-800.webp" alt="Hero"
srcset="hero-400.webp 400w, hero-800.webp 800w, hero-1200.webp 1200w"
sizes="(max-width: 600px) 400px, (max-width: 900px) 800px, 1200px"
loading="lazy" decoding="async"
width="1200" height="600">
<!-- LCP image: DO NOT lazy load, DO set fetchpriority high -->
<img src="hero.webp" alt="Hero" fetchpriority="high" width="1200" height="600">
Font Optimization
<!-- Preload the primary font -->
<link rel="preload" as="font" type="font/woff2"
href="/fonts/main.woff2" crossorigin>
/* Use font-display to avoid blocking render */
@font-face {
font-family: 'Main Font';
src: url('/fonts/main.woff2') format('woff2');
font-display: swap;
unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+2000-206F;
}
Subset fonts at build time to reduce file size:
# Using glyphhanger to subset
glyphhanger --whitelist="US-ASCII" --formats=woff2 --subset=main.ttf
JavaScript Optimization
<!-- Defer non-critical JS -->
<script defer src="/js/app.js"></script>
<!-- Async for independent scripts -->
<script async src="https://analytics.example.com/tag.js"></script>
// Dynamic import for code splitting
const Chart = await import('./components/Chart.js');
// Use requestIdleCallback for non-urgent work
requestIdleCallback(() => {
initializeAnalytics();
prefetchNextPageResources();
});
Tree-shake at build time by using ES module imports and ensuring your bundler (Webpack, Rollup, Vite) is configured for production mode.
CSS Optimization
<!-- Inline critical CSS -->
<style>
/* Only styles needed for above-the-fold content */
body { margin: 0; font-family: system-ui, sans-serif; }
.hero { min-height: 60vh; }
</style>
<!-- Defer non-critical CSS -->
<link rel="preload" href="/css/full.css" as="style"
onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="/css/full.css"></noscript>
/* Use contain to limit browser recalculation scope */
.card {
contain: layout style paint;
}
/* Use content-visibility for offscreen content */
.below-fold-section {
content-visibility: auto;
contain-intrinsic-size: 0 500px;
}
Third-Party Script Management
<!-- Load third-party scripts after page load -->
<script>
window.addEventListener('load', () => {
const script = document.createElement('script');
script.src = 'https://third-party.example.com/widget.js';
document.head.appendChild(script);
});
</script>
<!-- Use resource hints for required third-party origins -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="dns-prefetch" href="https://cdn.example.com">
For tag managers, use server-side tagging or limit client-side tags to reduce main-thread impact. Audit third-party scripts regularly with Chrome DevTools Network panel and the Coverage tab to find unused code.
Monitoring Setup: GA4 Custom Events + BigQuery
Step 1: Send CWV to GA4
import { onLCP, onINP, onCLS } from 'web-vitals/attribution';
function sendToGA4(metric) {
gtag('event', metric.name, {
value: Math.round(metric.name === 'CLS' ? metric.value * 1000 : metric.value),
event_category: 'Web Vitals',
event_label: metric.id,
metric_rating: metric.rating,
debug_target: metric.attribution?.interactionTarget
|| metric.attribution?.element
|| metric.attribution?.largestShiftTarget
|| '(not set)',
non_interaction: true,
});
}
onLCP(sendToGA4);
onINP(sendToGA4);
onCLS(sendToGA4);
Step 2: Query CWV from GA4 BigQuery Export
-- Daily CWV p75 from GA4 BigQuery export
WITH vitals AS (
SELECT
DATE(TIMESTAMP_MICROS(event_timestamp)) AS date,
event_name AS metric_name,
(SELECT value.int_value FROM UNNEST(event_params) WHERE key = 'value') AS metric_value,
(SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'metric_rating') AS rating,
(SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'debug_target') AS debug_target,
device.category AS device_type
FROM `project.analytics_123456789.events_*`
WHERE _TABLE_SUFFIX BETWEEN FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 28 DAY))
AND FORMAT_DATE('%Y%m%d', CURRENT_DATE() - 1)
AND event_name IN ('LCP', 'INP', 'CLS')
)
SELECT
date,
metric_name,
device_type,
COUNT(*) AS samples,
APPROX_QUANTILES(metric_value, 100)[OFFSET(75)] AS p75,
COUNTIF(rating = 'good') / COUNT(*) AS good_pct,
COUNTIF(rating = 'poor') / COUNT(*) AS poor_pct
FROM vitals
GROUP BY date, metric_name, device_type
ORDER BY date DESC, metric_name, device_type
Step 3: Identify Worst-Performing Pages
WITH page_vitals AS (
SELECT
(SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'page_location') AS page,
event_name AS metric_name,
(SELECT value.int_value FROM UNNEST(event_params) WHERE key = 'value') AS metric_value,
(SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'metric_rating') AS rating,
(SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'debug_target') AS debug_target
FROM `project.analytics_123456789.events_*`
WHERE _TABLE_SUFFIX BETWEEN FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 28 DAY))
AND FORMAT_DATE('%Y%m%d', CURRENT_DATE() - 1)
AND event_name IN ('LCP', 'INP', 'CLS')
)
SELECT
REGEXP_EXTRACT(page, r'^https?://[^/]+(/.*)') AS path,
metric_name,
COUNT(*) AS samples,
APPROX_QUANTILES(metric_value, 100)[OFFSET(75)] AS p75,
COUNTIF(rating = 'poor') / COUNT(*) AS poor_pct,
APPROX_TOP_COUNT(debug_target, 3) AS top_debug_targets
FROM page_vitals
WHERE page IS NOT NULL
GROUP BY path, metric_name
HAVING samples >= 50
ORDER BY poor_pct DESC
LIMIT 50
Step 4: Set Up Alerts
Create a scheduled query in BigQuery that runs daily and writes to a monitoring table. Alert when p75 values cross thresholds:
-- Insert daily CWV summary for monitoring
INSERT INTO `project.monitoring.cwv_daily_summary`
SELECT
CURRENT_DATE() AS report_date,
metric_name,
device_type,
COUNT(*) AS samples,
APPROX_QUANTILES(metric_value, 100)[OFFSET(75)] AS p75,
COUNTIF(rating = 'good') / COUNT(*) AS good_pct,
CASE
WHEN metric_name = 'LCP' AND APPROX_QUANTILES(metric_value, 100)[OFFSET(75)] > 4000 THEN 'POOR'
WHEN metric_name = 'LCP' AND APPROX_QUANTILES(metric_value, 100)[OFFSET(75)] > 2500 THEN 'NEEDS_IMPROVEMENT'
WHEN metric_name = 'INP' AND APPROX_QUANTILES(metric_value, 100)[OFFSET(75)] > 500 THEN 'POOR'
WHEN metric_name = 'INP' AND APPROX_QUANTILES(metric_value, 100)[OFFSET(75)] > 200 THEN 'NEEDS_IMPROVEMENT'
WHEN metric_name = 'CLS' AND APPROX_QUANTILES(metric_value, 100)[OFFSET(75)] > 250 THEN 'POOR'
WHEN metric_name = 'CLS' AND APPROX_QUANTILES(metric_value, 100)[OFFSET(75)] > 100 THEN 'NEEDS_IMPROVEMENT'
ELSE 'GOOD'
END AS status
FROM (
SELECT
event_name AS metric_name,
(SELECT value.int_value FROM UNNEST(event_params) WHERE key = 'value') AS metric_value,
(SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'metric_rating') AS rating,
device.category AS device_type
FROM `project.analytics_123456789.events_*`
WHERE _TABLE_SUFFIX = FORMAT_DATE('%Y%m%d', CURRENT_DATE() - 1)
AND event_name IN ('LCP', 'INP', 'CLS')
)
GROUP BY metric_name, device_type
Claude Code Skill
This reference is also available as a free Claude Code skill — use it directly in your terminal:
# Install
curl -sSL https://raw.githubusercontent.com/cognyai/claude-code-marketing-skills/main/install.sh | bash
# Use
/cwv-audit # Full CWV overview and audit guidance
/cwv-audit LCP # Deep-dive into LCP causes and fixes
/cwv-audit CrUX competitor # CrUX competitor comparison queries
Resources
- web.dev Core Web Vitals: web.dev/articles/vitals
- CrUX documentation: developer.chrome.com/docs/crux
- web-vitals library: github.com/GoogleChrome/web-vitals
- CrUX BigQuery cookbook: developer.chrome.com/docs/crux/bigquery
- PageSpeed Insights: pagespeed.web.dev
- Search Console CWV report: support.google.com/webmasters/answer/9205520
- Claude Code Marketing Skills: github.com/cognyai/claude-code-marketing-skills