Mastering Lighthouse: How to Use, Configure, and Interpret Web Performance Audits

This article explains what Lighthouse is and how to run it via the Chrome extension, Chrome DevTools, the CLI, or Node.js; details its internal architecture, including the driver, pass configuration, gatherers, and trace processing; and shows how audits produce performance reports scored against real-world HTTP Archive data.


What is Lighthouse?

Lighthouse analyzes web apps and web pages, collecting modern performance metrics and insights on developer best practices.

Usage Methods

Chrome browser extension – provides a friendly UI for reading reports.

Chrome DevTools – built into the latest Chrome, no installation required.

Lighthouse CLI – convenient for CI integration.

Node.js module – import the Lighthouse package directly in code, as shown in the sketch below.
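
For the Node.js route, a minimal sketch based on Lighthouse's documented programmatic usage (the URL and flag values are illustrative, not canonical):

import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Launch a headless Chrome instance that Lighthouse can drive over CDP.
const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});

// Run the default audits against the target URL.
const result = await lighthouse('https://example.com', {
  port: chrome.port, // connect to the launched Chrome instance
  output: 'html',    // also produce an HTML report string in result.report
});

console.log(result.lhr.categories.performance.score); // score in the 0–1 range
await chrome.kill();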

Architecture Overview

Gathering

Driver

The driver communicates via the Chrome Debugging Protocol (CDP) and Puppeteer to control a headless browser.
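
As an illustration of what the driver does under the hood, Puppeteer exposes the same protocol through CDP sessions; a rough sketch (the target URL is a placeholder):

import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Open a dedicated CDP session attached to the page's target.
const session = await page.target().createCDPSession();
await session.send('Network.enable');
session.on('Network.responseReceived', event => {
  console.log(event.response.status, event.response.url);
});

await page.goto('https://example.com');
await browser.close();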

Chrome Debugging Protocol (CDP)

CDP allows tools to inspect, debug, and analyze Chromium‑based browsers. In extensions, the chrome.debugger API uses WebSocket to establish a connection.

Instrumentation is divided into Domains (DOM, Debugger, Network, etc.), each exposing a set of commands and events as JSON objects.
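
For a sense of the wire format, a hand-rolled sketch using the ws package (the port and <targetId> are placeholders; the real WebSocket URL comes from Chrome's /json/list endpoint):

import WebSocket from 'ws';

// Chrome must be started with --remote-debugging-port=9222.
const ws = new WebSocket('ws://localhost:9222/devtools/page/<targetId>');

ws.on('open', () => {
  // A command is a JSON object with an id, a Domain.method name, and params.
  ws.send(JSON.stringify({id: 1, method: 'Network.enable', params: {}}));
});

// Responses echo the command id; events arrive as {method, params} objects.
ws.on('message', data => console.log(JSON.parse(data.toString())));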

Figure: CDP domains (red indicates experimental domains).

A domain must be enabled (via its enable command) before it emits events. Example:

// will NOT work: the listener is bound only after the enable command
// resolves, so events emitted in the meantime are lost
driver.defaultSession.sendCommand('Security.enable').then(_ => {
  driver.defaultSession.on('Security.securityStateChanged', state => {/* ... */});
});

// WILL work! happy happy. :)
driver.defaultSession.on('Security.securityStateChanged', state => {/* ... */}); // event binding is synchronous
driver.defaultSession.sendCommand('Security.enable');

Pass Configuration

The passes array controls which URLs are loaded and what information is collected. Each pass defines settings such as the load timeout, trace recording, and the list of gatherers that produce artifacts for later audits.

{
  passes: [{
    passName: 'fastPass',
    gatherers: ['fast-gatherer'],
  },
  {
    passName: 'slowPass',
    recordTrace: true,
    useThrottling: true,
    networkQuietThresholdMs: 5000,
    gatherers: ['slow-gatherer'],
  }]
}

Gatherers

Gatherers decide what data to collect during page load and output it as artifacts. Running with --gather-mode produces three main outputs:

artifacts.json – combined output of all gatherers.

defaultPass.trace.json – performance trace viewable in DevTools.

defaultPass.devtoolslog.json – log of DevTools Protocol events.

Each gatherer extends a base Gatherer class and implements lifecycle methods (startInstrumentation, stopInstrumentation, getArtifact, etc.). Example: the JavaScript usage gatherer:

class JsUsage extends FRGatherer {
  // Declare which gather modes this gatherer supports.
  meta = { supportedModes: ['snapshot', 'timespan', 'navigation'] };

  constructor() { super(); this._scriptUsages = []; }

  // Begin collecting JS coverage before the page loads.
  async startInstrumentation(context) {
    const session = context.driver.defaultSession;
    await session.sendCommand('Profiler.enable');
    await session.sendCommand('Profiler.startPreciseCoverage', { detailed: false });
  }

  // Snapshot the collected coverage, then tear down instrumentation.
  async stopInstrumentation(context) {
    const session = context.driver.defaultSession;
    const coverageResponse = await session.sendCommand('Profiler.takePreciseCoverage');
    this._scriptUsages = coverageResponse.result;
    await session.sendCommand('Profiler.stopPreciseCoverage');
    await session.sendCommand('Profiler.disable');
  }

  // Produce the artifact consumed by audits: coverage keyed by script ID.
  async getArtifact() {
    const usageByScriptId = {};
    for (const scriptUsage of this._scriptUsages) {
      // Skip anonymous scripts and Lighthouse's own injected code.
      if (scriptUsage.url === '' || scriptUsage.url === '_lighthouse-eval.js') continue;
      usageByScriptId[scriptUsage.scriptId] = scriptUsage;
    }
    return usageByScriptId;
  }
}

Trace Processing

The file core/lib/tracehouse/trace-processor.js converts raw trace events into meaningful objects. A typical trace event includes pid, tid, timestamp, duration, and other metadata.

{
  'pid': 41904, // process ID
  'tid': 1295, // thread ID
  'ts': 1676836141, // timestamp in microseconds
  'ph': 'X', // event type
  'cat': 'toplevel', // category
  'name': 'MessageLoop::RunTask', // description
  'dur': 64, // duration in microseconds
  'args': {}
}

Processed Trace

Processed traces identify key moments (navigation start, FCP, LCP, DCL, trace end) and filter main‑process and main‑thread events.

{
  processEvents: [/* all trace events in the main process */],
  mainThreadEvents: [/* all trace events on the main thread */],
  timings: { // milliseconds relative to timeOrigin
    timeOrigin: 0,
    firstContentfulPaint: 150,
    /* other key moments */
    traceEnd: 16420
  },
  timestamps: { // raw trace timestamps in microseconds
    timeOrigin: 623000000,
    firstContentfulPaint: 623150000,
    /* other key moments */
    traceEnd: 639420000
  }
}
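
To make the microseconds-to-milliseconds relationship concrete, here is a simplified sketch (not the actual trace-processor implementation; the event names are taken from common trace output but are assumptions here):

// Derive the timings/timestamps shape above from raw trace events.
// Raw `ts` values are in microseconds; `timings` are ms relative to timeOrigin.
function computeTimings(traceEvents) {
  const timeOrigin = traceEvents.find(e => e.name === 'navigationStart').ts;
  const fcp = traceEvents.find(e => e.name === 'firstContentfulPaint').ts;
  const traceEnd = Math.max(...traceEvents.map(e => e.ts + (e.dur || 0)));

  return {
    timestamps: {timeOrigin, firstContentfulPaint: fcp, traceEnd},
    timings: {
      timeOrigin: 0,
      firstContentfulPaint: (fcp - timeOrigin) / 1000,
      traceEnd: (traceEnd - timeOrigin) / 1000,
    },
  };
}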

Implementation Steps

Connecting to the browser.

Resetting state with about:blank.

Navigating to about:blank.

Benchmarking the machine.

Initializing.

Preparing the target for navigation mode.

Running the default pass.

Cleaning origin data and browser cache.

Preparing network conditions.

Beginning devtools log and trace.

Loading the page and waiting for onload.

Navigating to the target URL.

Gathering in‑page data.

Gathering trace and devtools log.

Finalizing artifacts and generating the report.

Auditing

Audits

Audits test individual features, optimizations, or metrics. Gathered artifacts serve as input, and each audit returns a score between 0 and 1.

Computed artifacts are derived from raw artifacts and may be shared across multiple audits.
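
A bare-bones audit, modeled on the custom-audit recipe in the Lighthouse docs (the audit id, artifact name, and threshold here are made up for illustration):

import {Audit} from 'lighthouse';

class FastStartupAudit extends Audit {
  static get meta() {
    return {
      id: 'fast-startup',                 // hypothetical audit id
      title: 'App starts quickly',
      failureTitle: 'App starts slowly',
      description: 'Checks a hypothetical StartupTime artifact.',
      requiredArtifacts: ['StartupTime'], // produced by a matching gatherer
    };
  }

  static audit(artifacts) {
    const startupMs = artifacts.StartupTime; // hypothetical artifact
    return {
      numericValue: startupMs,
      numericUnit: 'millisecond',
      // Binary pass/fail; real metric audits map values onto a scoring curve.
      score: startupMs < 1000 ? 1 : 0,
    };
  }
}

export default FastStartupAudit;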

Audits Configuration Example

{
  audits: [
    'first-contentful-paint',
    'byte-efficiency/uses-optimized-images'
  ]
}

Report Generation

The client renders a report page from the generated LHR (Lighthouse Result) JSON. The report contains five categories: Performance, Accessibility, Best Practices, SEO, and PWA, each with sub-audits, diagnostics, and optimization suggestions.
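
Reading scores back out of the LHR is straightforward; a small sketch (category scores are stored as 0–1 in the LHR and displayed as 0–100 in the report):

// Summarize category scores from a parsed LHR.json object.
function summarizeScores(lhr) {
  return Object.values(lhr.categories).map(
    category => `${category.title}: ${Math.round(category.score * 100)}`
  );
}
// e.g. ['Performance: 92', 'Accessibility: 100', ...]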

Scoring Model

Performance scores are derived from raw metric values (in milliseconds) by locating each value within a log‑normal distribution built from real‑world data in the HTTP Archive. The 25th percentile maps to a score of 50, and the 8th percentile maps to 90.
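
The mechanics can be sketched as follows, assuming the curve is pinned by two control points: a value that should score 0.5 (the median of the distribution) and a value that should score 0.9. This mirrors, but is not, Lighthouse's internal statistics helper:

// Map a raw metric value (ms) onto a 0–1 score using a log-normal curve.
// `median` scores 0.5 and `p10` scores 0.9; both come from field data.
function logNormalScore({median, p10}, value) {
  const mu = Math.log(median);
  // -1.28155 is the z-score of the 10th percentile of a standard normal.
  const sigma = (Math.log(p10) - mu) / -1.28155;
  const z = (Math.log(value) - mu) / sigma;
  return 1 - normalCdf(z); // lower metric values earn higher scores
}

function normalCdf(z) {
  return (1 + erf(z / Math.SQRT2)) / 2;
}

// Abramowitz–Stegun approximation of the error function.
function erf(x) {
  const sign = Math.sign(x);
  x = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * x);
  const y = 1 - ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t * Math.exp(-x * x);
  return sign * y;
}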

Score ranges are colored: 0‑49 (red) = poor, 50‑89 (orange) = needs improvement, 90‑100 (green) = good. Websites should aim for scores in the 90‑100 range for a good user experience.

References

Architecture: https://github.com/GoogleChrome/lighthouse/blob/main/docs/architecture.md

Puppeteer: https://github.com/puppeteer/puppeteer

WebSocket: https://github.com/websockets/ws

Better debugging of the Protocol: https://github.com/GoogleChrome/lighthouse/issues/184

DevTools Protocol: https://chromedevtools.github.io/devtools-protocol/

Trace event documentation: https://docs.google.com/document/d/1CvAClvFfyA5R-PhYUmn5OOQtYMH4h6I0nSsKchNAySU/preview

Performance scoring: https://web.dev/performance-scoring/

HTTP Archive: https://httparchive.org/reports/state-of-the-web

TTI scoring curve exploration: https://www.desmos.com/calculator/o98tbeyt1t
