
Service Workers as a Network Proxy Layer

The programmable network layer that sits between your application and the internet

Pradeep Davuluri · February 27, 2026 · 12 min read

The programmable network layer that sits between your application and the internet, quietly intercepting every request, enabling offline support, controlling caching strategies, and fundamentally changing how web apps handle connectivity. Think of it as a man-in-the-middle attack, except you wrote it on purpose.

01

The Service Worker Mental Model

A service worker is a JavaScript file that the browser runs in a separate thread, independent of any page. It acts as a programmable network proxy: every HTTP request made by pages under its scope passes through the service worker's fetch event handler before reaching the network. The service worker can inspect the request, return a cached response, modify the request, fetch from the network, construct a synthetic response, or combine all of these strategies.

This is fundamentally different from the browser's built-in HTTP cache, which is a black box controlled by Cache-Control headers. The service worker gives you full programmatic control over caching decisions, fallback logic, and network behavior, per request, per URL pattern, per content type. You're not configuring a cache; you're writing the cache.

Service workers are origin-scoped and HTTPS-only (with a localhost exception for development). They don't have access to the DOM, window, or document. They communicate with pages via the postMessage API and the Clients interface. They persist across page loads and browser sessions. Once installed, a service worker continues running (and intercepting requests) until explicitly unregistered or replaced by a new version.

The proxy analogy
Think of a service worker as a local proxy server running inside the browser. Like nginx or Varnish sitting in front of your application server, it intercepts inbound requests, can serve from cache, can forward to upstream, and can apply logic to decide which path to take. The difference: this proxy runs on the user's device, operates per-origin, and has zero network latency to the client.
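The simplest possible service worker makes the proxy model concrete: a fetch handler that forwards every request to the network untouched. This is a minimal sketch; the factored-out handler name is hypothetical, chosen so the forwarding logic is easy to test in isolation.

```javascript
// Minimal "transparent proxy": forward every request to the network.
// Functionally a no-op proxy, but every request now flows through
// code you control and could later cache, rewrite, or synthesize.
function transparentProxy(event) {
  event.respondWith(fetch(event.request));
}

// In sw.js: self.addEventListener('fetch', transparentProxy);
```

Every caching strategy in this article is a refinement of this handler: same interception point, different logic between `request` and `respondWith`.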
02

Lifecycle: Install, Activate, Fetch

The service worker lifecycle is the most misunderstood aspect of the API, and getting it wrong causes the most painful production bugs. The lifecycle has three phases, and understanding when each fires is essential.

Registration. A page calls navigator.serviceWorker.register('/sw.js'). The browser downloads sw.js, parses it, and begins the installation process. Registration is idempotent: calling it multiple times with the same URL is a no-op if the service worker hasn't changed.

Install. The install event fires once, when a new service worker is detected (byte-different from the currently installed version). This is where you precache critical resources: the app shell, core JavaScript bundles, CSS, key images. The service worker enters a "waiting" state after install completes, but it does not take control of existing pages. It waits until all tabs controlled by the old service worker are closed.

Activate. The activate event fires when the service worker takes control, either after the waiting period (all old tabs closed) or immediately if skipWaiting() was called during install. This is where you clean up old caches from previous versions. After activation, the service worker begins intercepting fetch events for all pages in its scope.

// sw.js - complete lifecycle
const CACHE_VERSION = 'v3';
const PRECACHE_URLS = [
  '/',
  '/app.js',
  '/styles.css',
  '/offline.html'
];

// Phase 1: Install - precache critical resources
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_VERSION).then(cache =>
      cache.addAll(PRECACHE_URLS)
    )
  );
});

// Phase 2: Activate - clean up old caches, then take control
self.addEventListener('activate', (event) => {
  event.waitUntil(
    caches.keys()
      .then(keys =>
        Promise.all(
          keys
            .filter(key => key !== CACHE_VERSION)
            .map(key => caches.delete(key))
        )
      )
      // Take control of all open tabs immediately. Keeping claim()
      // inside waitUntil ensures activation isn't considered complete
      // until the cleanup and takeover have both finished.
      .then(() => self.clients.claim())
  );
});
The skipWaiting footgun
Calling self.skipWaiting() during install forces the new service worker to activate immediately, taking control of all open tabs, even those running with cached assets from the old version. This can cause version mismatches: the new service worker serves new assets while the page's JavaScript expects old ones. Use skipWaiting cautiously, and only when your assets are backwards-compatible or you trigger a full page reload after activation.
03

The Fetch Event: Your Programmable Proxy

The fetch event is the core of the service worker's proxy capability. Every network request from a controlled page (document navigations, script loads, image fetches, API calls, font downloads) fires a fetch event in the service worker. The handler receives the Request object and must call event.respondWith() with a Response (or a Promise<Response>). If the handler doesn't call respondWith, the request proceeds to the network normally.

// Phase 3: Fetch - intercept every request
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);

  // Route requests to different strategies
  if (event.request.mode === 'navigate') {
    event.respondWith(networkFirst(event.request));
  } else if (url.pathname.startsWith('/api/')) {
    event.respondWith(networkOnly(event.request));
  } else if (url.pathname.match(/\.(js|css|woff2)$/)) {
    event.respondWith(cacheFirst(event.request));
  } else if (url.pathname.match(/\.(png|jpg|webp|avif)$/)) {
    event.respondWith(staleWhileRevalidate(event.request));
  }
  // Unhandled requests fall through to the network
});

The power of this model is the routing. Different URL patterns can use different caching strategies, different timeout thresholds, and different fallback responses. You're writing a request router that runs on the client, identical in concept to an Express/Hono server, but executing in the browser with access to a persistent cache.
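The routing decision itself is pure logic and can be factored out of the event handler. A sketch (resolveStrategy is a hypothetical helper mirroring the branches above):

```javascript
// Map a request to a strategy name, mirroring the router above.
// Returns null for requests that should fall through to the network.
function resolveStrategy(mode, pathname) {
  if (mode === 'navigate') return 'network-first';
  if (pathname.startsWith('/api/')) return 'network-only';
  if (/\.(js|css|woff2)$/.test(pathname)) return 'cache-first';
  if (/\.(png|jpg|webp|avif)$/.test(pathname)) return 'stale-while-revalidate';
  return null; // unhandled: let the browser hit the network
}
```

Factoring the routing table into a pure function makes it unit-testable without a browser, which matters given how painful service worker debugging is in production.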

04

Caching Strategies: A Pattern Language

Five caching strategies cover virtually every use case. Each strategy is a function that accepts a Request and returns a Promise<Response>.

Cache First: serve from cache if available, fall back to network. Best for immutable assets: hashed JavaScript bundles (app.a1b2c3.js), CSS files, fonts, and images with content hashes. Once cached, these files never need to be re-fetched because any change produces a new URL.

async function cacheFirst(request) {
  const cached = await caches.match(request);
  if (cached) return cached;

  const response = await fetch(request);
  // Only cache successful responses; never pin an error in the cache
  if (response.ok) {
    const cache = await caches.open('assets-v1');
    cache.put(request, response.clone());
  }
  return response;
}

Network First: try the network, fall back to cache if offline or slow. Best for HTML documents and dynamic content where freshness matters but offline access is valuable. Add a timeout to avoid long waits on slow networks.

// Race the fetch against a timer so slow networks fall back to cache
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('timeout')), ms)
    )
  ]);
}

async function networkFirst(request, timeoutMs = 3000) {
  const cache = await caches.open('pages-v1');

  try {
    const response = await withTimeout(fetch(request), timeoutMs);
    cache.put(request, response.clone());
    return response;
  } catch {
    const cached = await cache.match(request);
    return cached ?? caches.match('/offline.html');
  }
}

Stale-While-Revalidate: serve from cache immediately (instant response), then fetch from the network in the background and update the cache for next time. Best for content where speed matters more than absolute freshness: images, avatars, non-critical API responses, CMS content. The user sees the cached version instantly; the next visit sees the updated version.

async function staleWhileRevalidate(request) {
  const cache = await caches.open('swr-v1');
  const cached = await cache.match(request);

  // Fire-and-forget: update cache in background
  const networkPromise = fetch(request).then(response => {
    cache.put(request, response.clone());
    return response;
  });

  // With a cache hit, a failed background refresh is harmless;
  // swallow it so it doesn't surface as an unhandled rejection
  if (cached) {
    networkPromise.catch(() => {});
    return cached;
  }
  return networkPromise;
}

Network Only: always go to the network, never cache. For API mutations (POST, PUT, DELETE), analytics pings, and real-time data where stale responses would be harmful.

Cache Only: serve from cache, never touch the network. For precached assets during a controlled offline experience where you've explicitly populated the cache during install.
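These last two strategies are one-liners. A sketch (the 504 fallback on a cache-only miss is an arbitrary choice for illustration, not part of any spec):

```javascript
// Network Only: bypass the service worker cache entirely
async function networkOnly(request) {
  return fetch(request);
}

// Cache Only: never touch the network. A miss means the resource
// was never precached during install.
async function cacheOnly(request) {
  const cached = await caches.match(request);
  return cached ?? new Response('Not precached', { status: 504 });
}
```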

Strategy               | Speed            | Freshness               | Best For
Cache First            | Instant (on hit) | Stale until URL changes | Hashed assets (JS, CSS, fonts)
Network First          | Network speed    | Always fresh            | HTML documents, auth-gated data
Stale-While-Revalidate | Instant (on hit) | One visit behind        | Images, avatars, CMS content
Network Only           | Network speed    | Always fresh            | API mutations, analytics
Cache Only             | Instant          | Never updates           | Precached offline shell
05

Precaching vs. Runtime Caching

Precaching populates the cache during the service worker's install event, before any user interaction. You specify a list of URLs, and the service worker downloads and caches them all. Precaching guarantees that critical resources are available offline immediately after install. The trade-off: every precached resource adds to the install time and bandwidth cost. A 2MB precache payload takes 2–4 seconds on a 3G connection.

Runtime caching populates the cache on first access: when the user actually requests a resource, the service worker caches the response for subsequent requests. Runtime caching is lazy: it only caches what the user has visited. The trade-off: the first request for each resource is a cache miss (normal network request), and resources the user hasn't visited aren't available offline.

// Precaching: explicit list during install
// ✅ Guarantees offline availability for core app shell
// ⚠️ Costs bandwidth and install time upfront
const PRECACHE = [
  '/',                      // app shell HTML
  '/app.a1b2c3.js',       // core bundle
  '/styles.d4e5f6.css',   // main CSS
  '/offline.html'          // offline fallback
];

// Runtime caching: lazy, per-request caching
// ✅ Zero upfront cost, caches only what's used
// ⚠️ First visit for each resource is a cache miss
// Implemented inside the fetch event handler

The best strategy combines both: precache the critical app shell (HTML, framework JS, main CSS) during install, and runtime-cache everything else (images, API responses, non-critical pages) as the user navigates. This ensures the app boots offline while keeping the install payload small.

Practical takeaway
Precache only what's needed to render the app shell: typically the root HTML, main JS bundle, and primary CSS file. Everything else should be runtime-cached. If your precache exceeds 500KB, you're likely precaching too much. Audit the list and move non-critical resources to runtime caching.
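That 500KB budget can be enforced at build time. A sketch, assuming your build can report per-asset byte sizes (the function name and report shape are illustrative):

```javascript
// Flag a precache manifest that exceeds a byte budget.
// `assets` is a list of { url, bytes } entries from the build output.
function auditPrecache(assets, budgetBytes = 500 * 1024) {
  const total = assets.reduce((sum, a) => sum + a.bytes, 0);
  return {
    total,
    overBudget: total > budgetBytes,
    // Largest assets first: best candidates to move to runtime caching
    heaviest: [...assets].sort((a, b) => b.bytes - a.bytes).slice(0, 3)
  };
}
```

Wiring a check like this into CI keeps the precache list from silently growing with each release.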
06

Cache Versioning and Update Propagation

Service worker updates are the most operationally challenging aspect of the technology. When you deploy a new version of your app, you need the service worker to update its cache, but the update process is asynchronous and non-blocking. Users can be running an old service worker for hours or days after a deployment.

The browser checks for a new service worker on every navigation (or every 24 hours, whichever comes first). If the sw.js file has changed (byte-level comparison), the browser installs the new version. But the new version enters "waiting" state: it doesn't activate until all tabs controlled by the old version are closed. This ensures running pages aren't disrupted by a cache wipe in the middle of a session.

// Prompt user to update when new SW is waiting
// main.js (application code)
async function registerSW() {
  const reg = await navigator.serviceWorker.register('/sw.js');

  reg.addEventListener('updatefound', () => {
    const newSW = reg.installing;
    newSW.addEventListener('statechange', () => {
      if (newSW.state === 'installed' && navigator.serviceWorker.controller) {
        // New version is waiting - show update UI
        showUpdateBanner({
          onAccept() {
            newSW.postMessage({ type: 'SKIP_WAITING' });
          }
        });
      }
    });
  });

  // Reload when new SW takes control
  navigator.serviceWorker.addEventListener('controllerchange', () => {
    window.location.reload();
  });
}
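The page code above posts a SKIP_WAITING message to the waiting worker, which implies a matching listener on the service worker side. A sketch (the message shape matches the page code; factoring the handler out of the event listener is an illustrative choice):

```javascript
// sw.js: honor the SKIP_WAITING message sent by the page.
// Returns true when the message was recognized and acted on.
function handleMessage(data, sw) {
  if (data && data.type === 'SKIP_WAITING') {
    sw.skipWaiting(); // activate now, which fires controllerchange in pages
    return true;
  }
  return false;
}

// In sw.js: self.addEventListener('message', (e) => handleMessage(e.data, self));
```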
The stale deployment problem
If your service worker precaches /app.js without a content hash, and you deploy a new version of app.js, users with the old service worker continue receiving the old file from cache indefinitely. Always use content-hashed filenames for precached assets (app.a1b2c3.js), and update the precache list in the service worker when the hashes change. Build tools like Workbox automate this with a precache manifest.
07

Beyond Caching: Background Sync and Push

Service workers enable two powerful capabilities beyond caching: Background Sync and Push Notifications.

Background Sync allows the service worker to defer an action until the user has connectivity. When the user submits a form or makes an API call while offline, the application queues the request. When connectivity returns (even if the user has closed the tab), the service worker fires a sync event and processes the queued requests.

// Application code: queue failed requests for background sync
async function submitForm(data) {
  try {
    await fetch('/api/submit', { method: 'POST', body: JSON.stringify(data) });
  } catch {
    // Offline - queue for background sync
    await saveToIndexedDB('outbox', data);
    const reg = await navigator.serviceWorker.ready;
    await reg.sync.register('submit-outbox');
  }
}

// sw.js: process queue when connectivity returns
self.addEventListener('sync', (event) => {
  if (event.tag === 'submit-outbox') {
    event.waitUntil(processOutbox());
  }
});
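The processOutbox function referenced above is left undefined. A minimal sketch of its core loop, with the transport injected so it can run anywhere (drainOutbox and the send parameter are hypothetical names):

```javascript
// Retry every queued payload; return the ones that still fail so the
// caller can write them back to IndexedDB for the next sync event.
async function drainOutbox(items, send) {
  const stillFailing = [];
  for (const item of items) {
    try {
      await send(item);
    } catch {
      stillFailing.push(item); // keep for the next retry
    }
  }
  return stillFailing;
}
```

Returning failures instead of throwing means one bad payload can't block the rest of the queue from draining.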

Push Notifications allow the server to wake up the service worker and deliver a message even when no pages are open. The service worker receives a push event, processes the payload, and displays a notification using the Notifications API. This is the mechanism behind all web push notification systems: the service worker is the receiving agent that runs independently of the page lifecycle.

The Periodic Background Sync API
Beyond one-time sync, the Periodic Background Sync API (Chrome 80+) allows the service worker to execute at regular intervals (checking for updates, syncing data, refreshing cached content), even when the user isn't actively using the app. The browser controls the frequency based on site engagement (more engaged sites get more frequent syncs), preventing abuse.
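Registering a periodic sync looks like this sketch (the tag name and interval are arbitrary; the API requires the 'periodic-background-sync' permission and isn't available in all browsers, hence the feature check):

```javascript
// Ask the browser to wake the service worker roughly once per interval.
// The browser treats minInterval as a floor, not a guarantee.
async function registerPeriodicRefresh(registration, tag, minIntervalMs) {
  if (!('periodicSync' in registration)) return false; // unsupported browser
  await registration.periodicSync.register(tag, { minInterval: minIntervalMs });
  return true;
}

// Usage (hypothetical tag):
// const reg = await navigator.serviceWorker.ready;
// await registerPeriodicRefresh(reg, 'refresh-content', 24 * 60 * 60 * 1000);
```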
08

Debugging, Pitfalls, and Production Hygiene

Service worker bugs are uniquely painful because they persist across page loads and browser sessions. A broken service worker can serve stale or broken content to every user, and fixing it requires the user to receive and install the updated service worker, which itself requires a network request that the broken service worker might intercept.

Essential Debugging Tools

Chrome DevTools → Application → Service Workers shows the registration status, active and waiting workers, and provides controls to update, unregister, and bypass for network. The "Update on reload" checkbox forces the browser to install and activate the new service worker on every reload, skipping the waiting phase; this is essential during development. The Cache Storage panel shows all cache buckets and their contents.

Production Safety Patterns

Kill switch: Always include a mechanism to unregister the service worker remotely. The simplest approach: if the service worker detects a specific response from the server (a header, a JSON flag, a specific status code), it unregisters itself. This gives you an emergency escape hatch if a broken service worker ships to production.

// Kill switch: check a server flag on every navigation
self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request).then(response => {
        if (response.headers.get('X-SW-Kill') === 'true') {
          self.registration.unregister();
        }
        return response;
      }).catch(() => caches.match(event.request))
    );
  }
});

Scope control: Register your service worker at the narrowest scope that covers your application. A service worker registered at / intercepts every request from your origin, including third-party scripts, analytics calls, and assets you don't control. If your app lives at /app/, register there. If you need broader scope, be explicit about which requests the fetch handler processes and let everything else fall through to the network.
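Being explicit about which requests the handler processes can be as simple as an allowlist predicate (a sketch; the path prefixes are examples):

```javascript
// Only handle same-origin requests under paths we own; everything
// else falls through to the network untouched.
function shouldHandle(requestUrl, origin, ownedPrefixes = ['/app/', '/assets/']) {
  const url = new URL(requestUrl);
  if (url.origin !== origin) return false; // third-party: don't touch
  return ownedPrefixes.some(prefix => url.pathname.startsWith(prefix));
}
```

Called at the top of the fetch handler, this keeps analytics beacons and third-party scripts out of your caching logic entirely.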

Workbox: Google's Workbox library abstracts the common patterns (precaching with revision management, runtime caching strategies, cache expiration, background sync) into a declarative configuration. For production applications, Workbox eliminates most of the hand-rolled service worker code and its associated bugs. Use it unless you have a specific reason not to.

The nuclear option
If a catastrophically broken service worker ships and the kill switch doesn't work (because the service worker intercepts the response carrying the kill signal), the last resort is deploying a "no-op" service worker at the same URL: one that contains only self.skipWaiting() in the install handler and an empty fetch handler. The browser will detect the byte change, install the no-op worker, and the broken behavior stops. Always have this no-op script ready as a deployment artifact.