Page Speed Optimization: Complete Guide


Page speed is no longer a nice-to-have — it is a direct ranking factor in Google's algorithm and one of the most impactful levers you can pull for both SEO and user experience. Widely cited research from Akamai and Google links a one-second delay in page load time to roughly 7% fewer conversions and 11% fewer page views. In 2026, with Core Web Vitals firmly embedded in Google's ranking systems, optimizing page speed is essential for any site that wants to compete in organic search.

This guide covers everything you need to know about page speed optimization — from understanding the metrics Google cares about to implementing specific technical fixes that deliver measurable improvements. Whether you are working on a small blog or a large e-commerce platform, these techniques will help you build faster, more performant websites. Use our Page Speed Checker to benchmark your current performance before diving in.

1. Why Page Speed Matters for SEO

Google has used page speed as a ranking signal since 2010 for desktop and since 2018 for mobile (the "Speed Update"). In 2021, the Page Experience update made Core Web Vitals an official ranking factor. By 2026, page experience signals — including speed — are deeply integrated into how Google evaluates and ranks pages.

The data is clear. Google's own research shows that as page load time increases from 1 second to 3 seconds, the probability of bounce increases by 32%. When load time goes from 1 second to 5 seconds, bounce probability increases by 90%. For e-commerce sites, Amazon found that every 100ms of latency cost them 1% in sales. Walmart reported that for every 1-second improvement in page load time, conversions increased by 2%.

Beyond rankings, page speed affects crawl budget. Googlebot has a finite amount of time to spend on your site. Faster pages mean Google can crawl more of your content in the same time window, which is critical for large sites with thousands of pages. You can check your overall page experience with our Page Experience Checker.

Speed also impacts Core Web Vitals scores directly, which appear in Google Search Console and influence your eligibility for rich results and top stories placement. Sites that pass all three Core Web Vitals thresholds get a ranking boost — it is small but real, and in competitive niches, every edge matters.

2. Understanding Core Web Vitals

Core Web Vitals are three specific metrics that Google uses to measure real-world user experience on your pages. As of 2024, Google replaced First Input Delay (FID) with Interaction to Next Paint (INP), making the current set of Core Web Vitals: LCP, INP, and CLS. Test yours with our Core Web Vitals Checker.

Largest Contentful Paint (LCP)

LCP measures how long it takes for the largest visible content element to render on screen. This is typically a hero image, a large text block, or a video poster. LCP reflects the user's perception of when the page's main content has loaded.

Common causes of poor LCP include slow server response times, render-blocking CSS and JavaScript, slow resource load times for images or fonts, and client-side rendering that delays content visibility.

Interaction to Next Paint (INP)

INP replaced FID in March 2024 and measures the overall responsiveness of a page to user interactions throughout its entire lifecycle — not just the first interaction. INP observes the latency of every click, tap, and keyboard interaction and reports roughly the worst one: for most pages this is the single longest interaction, with a small number of extreme outliers discarded on pages that receive many interactions.

Poor INP is usually caused by long JavaScript tasks that block the main thread, excessive DOM size, heavy event handlers, and layout thrashing from JavaScript that reads and writes DOM properties in a loop.
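The standard fix for long tasks is to break the work into chunks and yield back to the main thread between them, so pending input events can be handled. A minimal sketch (the chunk size of 50 and the setTimeout-based yield are illustrative choices; in browsers that support it, scheduler.yield() is a cleaner primitive for the same thing):

```javascript
// Yield control back to the event loop so queued input events
// (clicks, taps, keypresses) can be processed between work chunks.
function yieldToMain() {
  return new Promise(resolve => setTimeout(resolve, 0));
}

// Process a large list without one long main-thread-blocking task.
async function processItems(items, handleItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    // Handle one chunk synchronously...
    items.slice(i, i + chunkSize).forEach(handleItem);
    // ...then yield before starting the next chunk.
    await yieldToMain();
  }
}
```

The same pattern applies inside heavy event handlers: do the minimum work needed to update the UI, then defer the rest to a later task.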

Cumulative Layout Shift (CLS)

CLS measures visual stability — how much the page layout shifts unexpectedly during loading. Every time a visible element changes position without user interaction, that counts as a layout shift. Individual shift scores that occur close together are grouped into a session window, and your CLS score is the largest window total — the biggest single burst of shifts.

Layout shifts are commonly caused by images without dimensions, dynamically injected content (ads, embeds), web fonts causing FOIT/FOUT, and late-loading CSS that repositions elements.
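Most of these shifts are preventable by reserving space before the content arrives. A short sketch (the 300px ad-slot height is a placeholder for your actual ad unit's dimensions):

```html
<!-- Explicit dimensions let the browser compute the aspect ratio
     and hold the slot open before the image loads. -->
<img src="/images/banner.webp" alt="Banner" width="1200" height="400">

<style>
  /* Reserve a minimum height for late-injected ads or embeds. */
  .ad-slot { min-height: 300px; }

  /* aspect-ratio reserves space for responsive containers too. */
  .video-wrap { aspect-ratio: 16 / 9; width: 100%; }
</style>
```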

3. How to Measure Page Speed

Before you optimize, you need accurate measurements. There are two categories of performance data: lab data (synthetic tests run in controlled environments) and field data (real user measurements collected from actual visitors). Both are valuable, but Google uses field data from the Chrome User Experience Report (CrUX) for ranking purposes. Measure your load time with our Page Load Timer.

PageSpeed Insights

Google's PageSpeed Insights (PSI) is the go-to tool for most developers. It provides both lab data (powered by Lighthouse) and field data (from CrUX). PSI gives you a score from 0 to 100 and specific recommendations for improvement. The field data section shows your actual Core Web Vitals as experienced by real Chrome users over the past 28 days.

Lighthouse

Lighthouse runs in Chrome DevTools (the Lighthouse panel, formerly called Audits), as a CLI tool, or as a Node module. It simulates a mid-tier mobile device on a throttled 4G connection and provides detailed performance audits. Run it from the command line for CI integration:

npx lighthouse https://example.com --output=json --output-path=./report.json --chrome-flags="--headless"

WebPageTest

WebPageTest offers the most detailed waterfall analysis available. You can test from multiple locations worldwide, on real devices, with various connection speeds. The filmstrip view shows exactly what users see at each point during page load. The "Opportunities & Experiments" feature can even test optimizations before you implement them.

Chrome DevTools Performance Panel

For deep debugging, the Performance panel in Chrome DevTools records a timeline of everything that happens during page load. You can see exactly which scripts block rendering, which layout shifts occur, and where long tasks are consuming the main thread. Use the Coverage tab to find unused CSS and JavaScript.

Field Data vs Lab Data

Lab data is reproducible and great for debugging, but it does not capture the full range of real-world conditions. Field data reflects actual user experience across different devices, networks, and geographic locations. Always prioritize field data for understanding your true performance, and use lab data for diagnosing specific issues. Check your page size and resource breakdown with our Page Size Analyzer.

4. Image Optimization

Images typically account for 40-60% of a page's total weight. Optimizing images is often the single highest-impact change you can make for page speed. Use our Image SEO Checker to audit your image optimization.

Modern Image Formats: WebP and AVIF

WebP provides 25-35% smaller file sizes compared to JPEG at equivalent quality. AVIF goes further, offering 50% savings over JPEG in many cases. Both formats support transparency (replacing PNG) and animation (replacing GIF). Use the <picture> element for format fallbacks:

<picture>
  <source srcset="hero.avif" type="image/avif">
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" alt="Hero image description"
       width="1200" height="600" loading="eager"
       fetchpriority="high">
</picture>

Responsive Images with srcset

Serving a 2000px-wide image to a 375px-wide mobile screen wastes bandwidth. Use srcset and sizes to let the browser choose the right image size:

<img src="product-800.webp"
     srcset="product-400.webp 400w,
             product-800.webp 800w,
             product-1200.webp 1200w,
             product-1600.webp 1600w"
     sizes="(max-width: 600px) 100vw,
            (max-width: 1200px) 50vw,
            800px"
     alt="Product photo"
     width="800" height="600"
     loading="lazy">

Lazy Loading

Native lazy loading with loading="lazy" defers off-screen images until the user scrolls near them. This is supported in all modern browsers and requires zero JavaScript. Critical above-the-fold images should use loading="eager" (the default) and fetchpriority="high" to ensure they load as fast as possible.

Image Compression

Always compress images before serving them. Tools like Sharp (Node.js), Squoosh, or ImageMagick can automate this in your build pipeline. A practical build-time script using Sharp's Node API:

// compress-images.js - convert JPEG/PNG sources to WebP,
// capped at 1600px wide, quality 80
import { readdirSync } from 'node:fs';
import path from 'node:path';
import sharp from 'sharp';

for (const file of readdirSync('./src/images')) {
  if (!/\.(jpe?g|png)$/i.test(file)) continue;
  await sharp(path.join('./src/images', file))
    .resize({ width: 1600, withoutEnlargement: true })
    .webp({ quality: 80 })
    .toFile(path.join('./dist/images', file.replace(/\.\w+$/, '.webp')));
}

For LCP images specifically, always set explicit width and height attributes to prevent layout shifts, use fetchpriority="high", and consider inlining a low-quality placeholder (LQIP) as a base64 data URI while the full image loads.
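A sketch of the LQIP approach (the base64 payload is elided here — generate it at build time from a roughly 20px-wide version of the image):

```html
<!-- The tiny inline placeholder paints immediately as the element's
     background; the real image covers it once it finishes loading. -->
<img src="/images/hero.avif"
     alt="Hero image description"
     width="1200" height="600"
     fetchpriority="high"
     style="background: url('data:image/webp;base64,...') center / cover no-repeat;">
```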

5. CSS and JavaScript Optimization

Render-blocking resources are one of the most common causes of slow page loads. The browser cannot render content until it has parsed all blocking CSS and executed all blocking JavaScript in the <head>.

Critical CSS

Extract the CSS needed to render above-the-fold content and inline it directly in the <head>. Load the remaining CSS asynchronously. This eliminates the render-blocking CSS request for initial paint:

<head>
  <!-- Inline critical CSS -->
  <style>
    /* Only styles needed for above-the-fold content */
    body { margin: 0; font-family: system-ui, sans-serif; }
    .header { background: #1a1a2e; color: #fff; padding: 1rem; }
    .hero { padding: 4rem 2rem; text-align: center; }
    .hero h1 { font-size: 2.5rem; margin: 0 0 1rem; }
  </style>
  <!-- Load full CSS asynchronously -->
  <link rel="preload" href="/css/style.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/style.css"></noscript>
</head>

Tools like critical (npm package) or PurgeCSS can automate critical CSS extraction and unused CSS removal.

JavaScript: defer, async, and Code Splitting

Scripts in the <head> without defer or async block HTML parsing. Use defer for scripts that need the DOM (they execute after parsing, in order). Use async for independent scripts like analytics (they execute as soon as they download, in any order):

<!-- Blocks parsing - avoid this -->
<script src="/js/app.js"></script>

<!-- Deferred - executes after HTML parsing, in order -->
<script defer src="/js/app.js"></script>

<!-- Async - executes when ready, no guaranteed order -->
<script async src="/js/analytics.js"></script>

Tree Shaking and Minification

Modern bundlers like Webpack, Rollup, and esbuild can eliminate unused code (tree shaking) and minify the output. A typical esbuild configuration for production:

// esbuild.config.js
import { build } from 'esbuild';

await build({
  entryPoints: ['src/index.js'],
  bundle: true,
  minify: true,
  treeShaking: true,
  splitting: true,
  format: 'esm',
  outdir: 'dist',
  target: ['es2020'],
  sourcemap: true,
});

Code splitting breaks your JavaScript into smaller chunks that load on demand. Route-based splitting is the most common approach — each page only loads the JavaScript it needs. This can reduce initial bundle size by 60-80% on large applications.
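The core of route-based splitting can be sketched as a loader cache: each route's module is fetched on first navigation and reused afterwards. In a real bundler setup the loader would be `() => import('./dashboard.js')`, which the bundler emits as a separate on-demand chunk; the stub loader here keeps the sketch self-contained:

```javascript
// Cache of route path -> loaded module, so each chunk is fetched once.
const routeModules = new Map();

async function loadRoute(path, loader) {
  if (!routeModules.has(path)) {
    routeModules.set(path, await loader()); // fetch the chunk on first visit
  }
  return routeModules.get(path); // later navigations hit the cache
}
```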

6. Server-Side Optimization

Your server response time (Time to First Byte, or TTFB) sets the floor for every other performance metric: LCP can never be faster than TTFB, so a server that takes 2 seconds to respond makes the 2.5-second "good" LCP threshold effectively unreachable. Check your server headers with our HTTP Header Checker.

Reducing TTFB

A good TTFB is under 200ms for cached content and under 600ms for dynamic content. Common causes of high TTFB include slow database queries, unoptimized application code, insufficient server resources, and lack of server-side caching. Profile your backend to find bottlenecks — tools like New Relic, Datadog, or even simple server-side timing headers can pinpoint slow operations.
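The Server-Timing response header is the simplest way to surface backend timings — values appear directly in the DevTools Network panel. A minimal sketch using Node's built-in http module (the 25ms "database query" is simulated, and the db metric name is an arbitrary label):

```javascript
// server.js - report backend timings via the Server-Timing header.
import { createServer } from 'node:http';

const server = createServer(async (req, res) => {
  const start = process.hrtime.bigint();
  await new Promise(resolve => setTimeout(resolve, 25)); // stand-in for a real query
  const dbMs = Number(process.hrtime.bigint() - start) / 1e6;

  // DevTools shows this as a named timing bar per response.
  res.setHeader('Server-Timing', `db;dur=${dbMs.toFixed(1)};desc="orders query"`);
  res.end('<!doctype html><title>ok</title>');
});

// Port 0 asks the OS for a free port; use your real port in production.
server.listen(0);
```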

HTTP/2 and HTTP/3

HTTP/2 enables multiplexing (multiple requests over a single connection) and header compression; it also specified server push, though Chrome has since removed support for it. HTTP/3 uses QUIC (UDP-based) for even faster connection establishment and better performance on lossy networks. Enable HTTP/2 in Nginx:

# /etc/nginx/conf.d/site.conf
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Enable gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_types text/plain text/css application/json
               application/javascript text/xml application/xml
               image/svg+xml;

    # Enable Brotli compression (if module installed)
    brotli on;
    brotli_comp_level 6;
    brotli_types text/plain text/css application/json
                 application/javascript text/xml application/xml
                 image/svg+xml;

    location / {
        root /var/www/html;
        index index.html;
        try_files $uri $uri/ =404;
    }
}

Compression: Gzip and Brotli

Brotli compression typically achieves 15-25% better compression ratios than gzip for text-based assets (HTML, CSS, JS, SVG). Most modern browsers support Brotli. Use Brotli for static assets (pre-compressed at build time) and gzip as a fallback. Pre-compress static files during your build process for the best compression ratios without runtime CPU cost:

# Pre-compress static assets at build time
find ./dist -type f \( -name "*.html" -o -name "*.css" -o -name "*.js" -o -name "*.svg" \) \
  -exec brotli -q 11 {} \; \
  -exec gzip -9 -k {} \;

Server-Side Rendering and Static Generation

Client-side rendered (CSR) applications send an empty HTML shell and rely on JavaScript to build the page. This is terrible for LCP because the browser must download, parse, and execute JavaScript before any content appears. Server-side rendering (SSR) or static site generation (SSG) sends fully rendered HTML, dramatically improving LCP and time-to-interactive.
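To make the difference concrete, here is a minimal static-generation sketch: all HTML is rendered at build time, so the browser receives complete markup with no client-side rendering step. The pages array stands in for a real content source (CMS, markdown files, a database):

```javascript
// build.js - render full HTML pages at build time (minimal SSG sketch).
import { mkdirSync, writeFileSync } from 'node:fs';

const pages = [
  { slug: 'index', title: 'Home', body: '<p>Welcome!</p>' },
  { slug: 'about', title: 'About', body: '<p>About us.</p>' },
];

// Template function: data in, complete HTML document out.
const render = ({ title, body }) =>
  `<!doctype html><html><head><meta charset="utf-8"><title>${title}</title>` +
  `</head><body><h1>${title}</h1>${body}</body></html>`;

mkdirSync('dist', { recursive: true });
for (const page of pages) {
  writeFileSync(`dist/${page.slug}.html`, render(page));
}
```

Frameworks like Next.js, Astro, and Eleventy industrialize this idea, but the performance benefit comes from the same principle: content is in the HTML before any JavaScript runs.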

7. Caching Strategies

Effective caching can eliminate network requests entirely for returning visitors, making subsequent page loads nearly instant. A well-implemented caching strategy is one of the most powerful performance optimizations available.

Browser Caching with Cache-Control Headers

Set appropriate Cache-Control headers for different resource types. Static assets with hashed filenames can be cached aggressively, while HTML should have shorter cache times or use revalidation:

# Nginx caching configuration
# HTML - short cache with revalidation
location ~* \.html$ {
    # Nginx emits ETags for static files by default, enabling revalidation
    add_header Cache-Control "public, max-age=300, must-revalidate";
}

# CSS/JS with content hash in filename - cache for 1 year
location ~* \.(css|js)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}

# Images - cache for 30 days
location ~* \.(jpg|jpeg|png|gif|webp|avif|svg|ico)$ {
    add_header Cache-Control "public, max-age=2592000";
}

# Fonts - cache for 1 year
location ~* \.(woff|woff2|ttf|eot)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
    add_header Access-Control-Allow-Origin "*";
}

CDN (Content Delivery Network)

A CDN caches your content on edge servers distributed globally, reducing latency by serving content from the server closest to the user. For a site with global traffic, a CDN can reduce TTFB by 50-80%. Popular options include Cloudflare, Fastly, AWS CloudFront, and Bunny CDN. At minimum, serve static assets through a CDN. For dynamic content, consider edge computing solutions like Cloudflare Workers or Vercel Edge Functions.

Service Workers for Offline Caching

Service workers can cache critical resources and serve them from the local cache, enabling instant page loads for returning visitors and even offline access. A basic caching service worker:

// sw.js - Cache-first strategy for static assets
const CACHE_NAME = 'v1';
const STATIC_ASSETS = [
  '/',
  '/css/style.css',
  '/js/app.js',
  '/fonts/inter-var.woff2'
];

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(cache => cache.addAll(STATIC_ASSETS))
  );
});

self.addEventListener('fetch', event => {
  if (event.request.destination === 'image' ||
      event.request.url.includes('/css/') ||
      event.request.url.includes('/js/')) {
    event.respondWith(
      caches.match(event.request)
        .then(cached => cached || fetch(event.request)
          .then(response => {
            const clone = response.clone();
            caches.open(CACHE_NAME)
              .then(cache => cache.put(event.request, clone));
            return response;
          }))
    );
  }
});


8. Font Optimization

Web fonts are a common source of both LCP delays and CLS issues. An unoptimized font loading strategy can add 1-3 seconds to your page load time and cause visible text flashes that frustrate users.

font-display: swap

The font-display descriptor controls how a font is displayed while it is loading. Using swap tells the browser to immediately show text in a fallback font, then swap to the web font once it loads. This prevents invisible text (FOIT) and ensures content is readable immediately:

@font-face {
  font-family: 'Inter';
  src: url('/fonts/inter-var.woff2') format('woff2');
  font-weight: 100 900;
  font-style: normal;
  font-display: swap;
}

For Google Fonts, add the display=swap parameter to the URL; newer embed codes include it by default.

Preloading Critical Fonts

Preload the fonts used for above-the-fold content so the browser starts downloading them early, before it discovers them in the CSS:

<link rel="preload" href="/fonts/inter-var.woff2"
      as="font" type="font/woff2" crossorigin>

Only preload 1-2 critical font files. Preloading too many fonts wastes bandwidth and can actually slow down more important resources.

Font Subsetting

Most websites only use a fraction of the characters in a font file. Subsetting removes unused glyphs, dramatically reducing file size. A full Inter font file might be 300KB; a Latin-only subset can be under 20KB. Use tools like glyphhanger or pyftsubset to create subsets:

# Create a Latin-only subset with pyftsubset
pyftsubset Inter-Regular.ttf \
  --output-file=Inter-Regular-latin.woff2 \
  --flavor=woff2 \
  --layout-features="kern,liga,calt" \
  --unicodes="U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD"

Variable Fonts

Variable fonts contain multiple weights and styles in a single file, replacing the need to load separate files for regular, bold, italic, etc. A single variable font file is typically smaller than two or three static font files combined. This reduces HTTP requests and total font payload.

Reducing CLS from Font Loading

To minimize layout shift when fonts swap, match your fallback font metrics to your web font as closely as possible. The CSS size-adjust, ascent-override, descent-override, and line-gap-override descriptors let you fine-tune fallback font metrics:

@font-face {
  font-family: 'Inter Fallback';
  src: local('Arial');
  size-adjust: 107%;
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}

body {
  font-family: 'Inter', 'Inter Fallback', system-ui, sans-serif;
}

9. Third-Party Script Management

Third-party scripts — analytics, ads, chat widgets, social embeds, A/B testing tools — are often the biggest performance killers on modern websites. A single poorly loaded third-party script can add seconds to your page load time and tank your INP score.

Audit Your Third-Party Scripts

Start by inventorying every third-party script on your site. Use Chrome DevTools Network panel filtered to "3rd-party" or the Lighthouse "Reduce the impact of third-party code" audit. For each script, ask: Is this still needed? What is its performance cost? Can it be loaded more efficiently? Run a comprehensive audit with our SEO Audit Tool.

Async Loading Patterns

Never load third-party scripts synchronously. At minimum, use async or defer. For scripts that are not needed immediately (chat widgets, social buttons), delay loading until user interaction or after the page has fully loaded:

// Load chat widget only after user interaction
let chatLoaded = false;
function loadChat() {
  if (chatLoaded) return;
  chatLoaded = true;
  const script = document.createElement('script');
  script.src = 'https://chat-provider.com/widget.js';
  document.body.appendChild(script);
}

// Trigger on first user interaction
['click', 'scroll', 'mousemove', 'touchstart'].forEach(event => {
  document.addEventListener(event, loadChat, { once: true });
});

// Or after a delay as fallback
setTimeout(loadChat, 5000);

The Facade Pattern

For heavy embeds like YouTube videos, use a facade — a lightweight placeholder that looks like the embed but only loads the actual iframe when the user clicks. This can save 500KB+ per YouTube embed. A simple YouTube facade:

<div class="youtube-facade" onclick="this.innerHTML='<iframe src=&quot;https://www.youtube-nocookie.com/embed/VIDEO_ID?autoplay=1&quot; frameborder=&quot;0&quot; allow=&quot;autoplay&quot; allowfullscreen style=&quot;width:100%;height:100%;position:absolute;top:0;left:0&quot;></iframe>'" style="position:relative;padding-bottom:56.25%;background:#000;cursor:pointer">
  <img src="/images/video-thumb.webp" alt="Video title"
       style="width:100%;height:100%;object-fit:cover;position:absolute;top:0;left:0" loading="lazy">
  <svg style="position:absolute;top:50%;left:50%;transform:translate(-50%,-50%)" width="68" height="48" viewBox="0 0 68 48">
    <path d="M66.52 7.74c-.78-2.93-2.49-5.41-5.42-6.19C55.79.13 34 0 34 0S12.21.13 6.9 1.55C3.97 2.33 2.27 4.81 1.48 7.74.06 13.05 0 24 0 24s.06 10.95 1.48 16.26c.78 2.93 2.49 5.41 5.42 6.19C12.21 47.87 34 48 34 48s21.79-.13 27.1-1.55c2.93-.78 4.64-3.26 5.42-6.19C67.94 34.95 68 24 68 24s-.06-10.95-1.48-16.26z" fill="red"/>
    <path d="M45 24L27 14v20" fill="#fff"/>
  </svg>
</div>

Tag Manager Best Practices

Google Tag Manager (GTM) itself is relatively lightweight, but the tags loaded through it can be devastating for performance. Set up trigger conditions so tags only fire when needed, use the "Custom HTML" tag type sparingly, and regularly audit your GTM container for unused or redundant tags. Consider using server-side GTM to move processing off the client.

10. Advanced Techniques

Once you have covered the fundamentals, these advanced resource hint and prioritization techniques can squeeze out additional performance gains.

Resource Hints: preconnect, prefetch, preload

Resource hints tell the browser about resources it will need soon, allowing it to start fetching them earlier in the page load process:

<!-- Preconnect to critical third-party origins -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link rel="preconnect" href="https://cdn.example.com">

<!-- DNS prefetch for less critical origins -->
<link rel="dns-prefetch" href="https://analytics.example.com">

<!-- Preload critical resources -->
<link rel="preload" href="/fonts/inter-var.woff2" as="font"
      type="font/woff2" crossorigin>
<link rel="preload" href="/images/hero.avif" as="image"
      type="image/avif">

<!-- Prefetch next page resources -->
<link rel="prefetch" href="/about/">

Priority Hints (fetchpriority)

The fetchpriority attribute lets you signal the relative importance of resources to the browser. This is particularly useful for LCP images that might otherwise be deprioritized:

<!-- High priority for LCP image -->
<img src="hero.webp" fetchpriority="high" alt="Hero">

<!-- Low priority for below-fold images -->
<img src="footer-logo.webp" fetchpriority="low" loading="lazy" alt="Logo">

<!-- High priority for critical script -->
<script src="/js/critical.js" fetchpriority="high"></script>

Speculation Rules API

The Speculation Rules API (supported in Chrome) allows you to prerender entire pages that the user is likely to navigate to, making the next page load appear instant:

<script type="speculationrules">
{
  "prerender": [
    {
      "where": {
        "and": [
          {"href_matches": "/*"},
          {"not": {"href_matches": "/logout"}}
        ]
      },
      "eagerness": "moderate"
    }
  ]
}
</script>

Use this carefully — prerendering consumes bandwidth and memory. The "eagerness": "moderate" setting only prerenders when the user hovers over a link, which is a good balance between performance and resource usage.

11. Mobile Page Speed

Mobile devices face unique performance challenges: slower processors, less memory, unreliable network connections, and smaller screens. Google uses mobile-first indexing, meaning the mobile version of your site is what Google evaluates for rankings. Check your mobile readiness with our Mobile-Friendly Checker.

Mobile-Specific Optimizations

Mobile optimization goes beyond responsive design: serving mobile-sized assets and testing on real hardware matter just as much.

Responsive Images for Mobile

Serving appropriately sized images is even more critical on mobile. A 2000px hero image on a 375px screen wastes 80%+ of the downloaded bytes. Use srcset with width descriptors and the sizes attribute to ensure mobile devices download smaller images:

<img src="hero-800.webp"
     srcset="hero-400.webp 400w,
             hero-600.webp 600w,
             hero-800.webp 800w,
             hero-1200.webp 1200w"
     sizes="100vw"
     alt="Hero image"
     width="1200" height="600"
     fetchpriority="high">

AMP Alternatives

Google no longer requires AMP for Top Stories or other special search features. Instead of maintaining a separate AMP version of your site, focus on making your regular pages fast. A well-optimized standard page will perform as well as or better than AMP, without the development overhead and content restrictions. The key metrics Google cares about are Core Web Vitals — meet those thresholds and you get the same benefits AMP used to provide exclusively.

12. Monitoring and Continuous Improvement

Page speed optimization is not a one-time project. New content, updated dependencies, added features, and third-party script changes can all degrade performance over time. Continuous monitoring is essential to maintain and improve your speed gains.

Chrome User Experience Report (CrUX)

CrUX is the dataset Google uses for ranking purposes. It collects real-world performance data from Chrome users who have opted in to usage statistics. You can access CrUX data through the CrUX API, BigQuery, or PageSpeed Insights. CrUX data is aggregated over a rolling 28-day window and is available at both origin and URL level.

# Query CrUX API for a specific URL
curl "https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com/",
    "metrics": ["largest_contentful_paint", "interaction_to_next_paint", "cumulative_layout_shift"]
  }'

Google Search Console Core Web Vitals Report

The Core Web Vitals report in Google Search Console groups your URLs into "Good," "Needs Improvement," and "Poor" categories based on CrUX data. It highlights specific issues and affected URL groups, making it easy to prioritize fixes. Check this report weekly and set up email alerts for regressions.

Real User Monitoring (RUM)

RUM tools collect performance data from every page view on your site, giving you much more granular data than CrUX. You can segment by device type, browser, geographic location, page type, and more. Popular RUM solutions include SpeedCurve, Calibre, and the free web-vitals JavaScript library:

// Collect Core Web Vitals with the web-vitals library
import { onLCP, onINP, onCLS } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    delta: metric.delta,
    id: metric.id,
    navigationType: metric.navigationType,
    url: window.location.href,
  });

  // Use sendBeacon for reliable delivery
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/api/vitals', body);
  } else {
    fetch('/api/vitals', { body, method: 'POST', keepalive: true });
  }
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);

Performance Budgets

Set performance budgets to prevent regressions. A performance budget defines limits for metrics like total page weight, JavaScript size, number of requests, or specific Web Vitals thresholds. Integrate budget checks into your CI/CD pipeline so builds fail if they exceed the budget:

// budget.json for Lighthouse CI (resource sizes in KB, timings in ms)
[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 3000 },
      { "metric": "first-contentful-paint", "budget": 1500 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 200 },
      { "resourceType": "stylesheet", "budget": 50 },
      { "resourceType": "image", "budget": 300 },
      { "resourceType": "total", "budget": 600 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 5 }
    ]
  }
]

Also consider the environmental impact of your page weight. Lighter pages mean less energy consumed per page view. Check your site's carbon footprint with our Website Carbon Checker.

13. Common Page Speed Mistakes

Even experienced developers make these mistakes. Avoid them to prevent undoing your optimization work:

  1. Not setting image dimensions: Every <img> tag should have explicit width and height attributes. Without them, the browser cannot reserve space before the image loads, causing layout shifts (CLS).
  2. Loading all JavaScript upfront: If your page loads 2MB of JavaScript before the user can interact, your INP will suffer. Code-split aggressively and defer non-critical scripts.
  3. Ignoring TTFB: No amount of frontend optimization can compensate for a server that takes 3 seconds to respond. Profile your backend and fix slow database queries, missing indexes, and inefficient application code.
  4. Over-preloading resources: Preloading too many resources competes for bandwidth with truly critical resources. Limit preloads to 2-3 critical assets (LCP image, primary font, critical CSS).
  5. Not testing on real mobile devices: Desktop Chrome with throttling enabled does not accurately simulate real mobile performance. The CPU throttling in particular is unreliable. Test on actual mid-range Android devices.
  6. Forgetting about web fonts: An unoptimized font loading strategy can add 1-3 seconds to LCP and cause significant CLS. Always use font-display: swap, preload critical fonts, and subset your font files.
  7. Caching static assets without versioning: If you set long cache times on CSS and JS files without content hashing in the filename, users will get stale versions after you deploy updates. Use content-based hashing (e.g., style.a1b2c3.css) with immutable cache headers.
  8. Adding third-party scripts without measuring impact: Every new script should be evaluated for its performance cost. A "lightweight" chat widget might add 500KB of JavaScript and 10 additional network requests.
  9. Optimizing only for lab data: A perfect Lighthouse score means nothing if your field data (CrUX) shows poor performance. Real users on slow devices and networks are what matter for rankings.
  10. Not monitoring after optimization: Performance degrades over time as new features, content, and dependencies are added. Set up continuous monitoring and performance budgets to catch regressions early.
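Mistake #7 is worth a concrete sketch. Content hashing derives part of the filename from the file's bytes, so any change produces a new URL and aggressive immutable caching becomes safe (the 8-character sha256 prefix length is an arbitrary choice; bundlers do this automatically, but the idea is simple):

```javascript
// hash-assets.js - derive cache-busting filenames from file contents.
import { createHash } from 'node:crypto';

function hashedFilename(name, contents) {
  // Same bytes always produce the same name; changed bytes, a new one.
  const digest = createHash('sha256').update(contents).digest('hex').slice(0, 8);
  return name.replace(/(\.[^.]+)$/, `.${digest}$1`);
}

// 'style.css' becomes something like 'style.3f2a9c1b.css'
```

Pair the hashed name with `Cache-Control: public, max-age=31536000, immutable` and deploys can never serve stale assets.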

14. Key Takeaways

Page speed optimization is a continuous process that directly impacts your search rankings, user experience, and conversion rates.

Start by measuring your current performance with our Page Speed Checker and Core Web Vitals Checker. Identify your biggest bottlenecks, implement the fixes described in this guide, and monitor the results. Even small improvements compound — a 10% reduction in load time across every page on your site can meaningfully impact your organic traffic and conversion rates.

Remember: the goal is not a perfect Lighthouse score. The goal is fast, reliable, visually stable pages that serve your users well — and that is exactly what Google rewards in its rankings.
