Inside Chrome: How Multi‑Process Architecture Powers Fast Rendering
This multi‑part article explains Chrome’s low‑level architecture—from CPU/GPU fundamentals and its multi‑process model to the step‑by‑step navigation flow, the rendering pipeline, and how the compositor processes input events—providing developers with a deep understanding of browser performance and best‑practice optimizations.
Part 1 – CPU, GPU, Memory and Multi‑Process Architecture
Chrome runs on a computer’s central processing unit (CPU) and graphics processing unit (GPU). The CPU is the brain of the system, with multiple cores handling many tasks in parallel. The GPU excels at parallel processing of simple operations, originally designed for graphics but now also used for general‑purpose computation.
When an application starts, the operating system creates a process with its own memory space. A process may spawn threads to perform work concurrently. Chrome isolates different browser responsibilities into separate processes to improve stability and security.
Chrome’s process model includes:
Browser process – controls the UI (address bar, bookmarks, navigation buttons) and privileged tasks such as network requests and file access.
Renderer processes – one per tab (or per site isolation group) to render web content.
GPU process – handles GPU‑accelerated drawing for all tabs.
Plugin process – runs legacy plugins such as Flash (now deprecated and removed from modern Chrome).
Each process has its own memory, which increases overall RAM usage but isolates crashes and security breaches. Chrome limits the total number of processes based on device memory and CPU capacity, sometimes grouping multiple tabs from the same site into a single renderer to conserve resources.
Part 2 – What Happens During Navigation
When a user types a URL, the browser UI thread parses the input to decide whether it is a search query or a URL. It then initiates a network request via the network thread, handling DNS, TLS, redirects (e.g., HTTP 301), and receiving response headers.
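The UI thread's decision can be sketched as a simple heuristic. This is an illustrative approximation only; the function name and rules below are assumptions for the sketch, not Chrome's actual omnibox logic:

```javascript
// Rough sketch of the "URL or search query?" decision the UI thread makes.
function classifyOmniboxInput(input) {
  const trimmed = input.trim();
  // Anything with an explicit scheme is treated as a URL.
  if (/^[a-z][a-z0-9+.-]*:\/\//i.test(trimmed)) return 'url';
  // No spaces and at least one dot (e.g. "example.com") looks like a URL.
  if (!trimmed.includes(' ') && trimmed.includes('.')) return 'url';
  // Everything else goes to the default search engine.
  return 'search';
}
```

For example, "example.com/path" would be treated as a URL, while "how to bake bread" would be sent to the search engine.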
The network thread examines the Content‑Type header and may perform MIME‑type sniffing (see Chromium’s mime_sniffer.cc) to confirm the payload type. HTML responses are handed off to a renderer process; other payloads (e.g., ZIP files) trigger the download manager.
Security checks (safe‑browsing, CORB) run before the data reaches the renderer. Once the network thread confirms the navigation, it selects or creates a renderer process and sends an IPC message to commit the navigation, after which the document load phase begins.
Service workers can intercept network requests. When a service worker is registered, its scope is stored. During navigation, the network thread checks the URL against registered scopes; if a match exists, the renderer runs the service worker code, which may serve cached content or fetch fresh resources.
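The scope check can be sketched as a longest-prefix match. This is a simplification of the real matching rules (which normalize URLs, ignore fragments, and so on), and the function name is illustrative:

```javascript
// Simplified sketch of service worker scope matching: a navigation URL
// matches a registration if it falls under the registered scope prefix,
// and the longest (most specific) matching scope wins.
function findMatchingScope(url, registeredScopes) {
  let best = null;
  for (const scope of registeredScopes) {
    if (url.startsWith(scope) && (best === null || scope.length > best.length)) {
      best = scope;
    }
  }
  return best;
}
```

If this returns a scope, the navigation is routed through that registration's service worker; otherwise the request goes straight to the network.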
Navigation preload mitigates this extra hop: the network request is started in parallel with service worker startup, so the worker's boot time does not delay the fetch.
Part 3 – Inside the Rendering Process
The renderer consists of a main thread, optional Web/Service workers, a compositor thread, and a raster thread. Its core job is to turn HTML, CSS, and JavaScript into a visual page.
Parsing and DOM Construction
The main thread parses the incoming HTML stream into a Document Object Model (DOM). Errors such as missing closing tags are handled gracefully according to the HTML specification.
CSS Parsing and Style Calculation
Simultaneously, the main thread parses CSS and computes the computed style for each DOM node. Even without author CSS, default user-agent styles apply (e.g., <h1> renders larger than <h2>).
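The cascade's effect can be illustrated with a toy sketch in which user-agent defaults apply first and author declarations override them. The property values and helper below are illustrative, not the real UA stylesheet:

```javascript
// Toy sketch of computed-style calculation: user-agent defaults apply
// first, then author styles override them. Values are illustrative.
const UA_DEFAULTS = {
  h1: { display: 'block', fontSize: '2em', fontWeight: 'bold' },
  h2: { display: 'block', fontSize: '1.5em', fontWeight: 'bold' },
};

function computeStyle(tagName, authorStyle = {}) {
  // Author declarations win over user-agent defaults.
  return { ...(UA_DEFAULTS[tagName] || {}), ...authorStyle };
}
```

With no author CSS, computeStyle('h1') still yields a font size larger than computeStyle('h2'), mirroring why an unstyled <h1> renders bigger than an <h2>.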
Layout
Using the styled DOM, the layout engine builds a layout tree containing geometric information (x, y, width, height). Elements with display:none are omitted; those with visibility:hidden remain in the tree.
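The display:none versus visibility:hidden distinction can be illustrated with a toy layout-tree builder. The node shape and function name are assumptions for illustration only:

```javascript
// Toy sketch of layout-tree construction: display:none subtrees are
// dropped entirely, while visibility:hidden nodes stay in the tree
// (they still occupy space on the page).
function buildLayoutTree(node) {
  if (node.style && node.style.display === 'none') return null;
  const children = (node.children || [])
    .map(buildLayoutTree)
    .filter(child => child !== null);
  return { tag: node.tag, style: node.style || {}, children };
}
```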
Paint
The main thread traverses the layout tree to generate paint records (e.g., background, text, rectangles) in the correct drawing order. Note that script loading affects the earlier parsing stage: when the HTML parser encounters a <script> tag it pauses until the script has downloaded and executed; the async or defer attributes avoid this blocking.
Compositing
After paint, the main thread walks the layout tree to create a layer tree. Developers can hint that an element should get its own layer with the will-change CSS property (e.g., will-change: transform). The compositor thread then rasterizes each layer (on raster threads) into GPU memory and assembles compositor frames, which are sent through the browser process to the GPU and finally displayed.
Because compositing runs on a separate thread, it can produce smooth animations without stalling the main thread, provided the page does not require frequent layout or paint updates.
Part 4 – How the Compositor Handles Input
Input events (touch, mouse, keyboard) first arrive at the browser process, which forwards the event type and coordinates to the renderer. The renderer performs a hit‑test using the paint records to find the target element.
Areas that have JavaScript event listeners are marked as “non‑fast‑scrollable.” For such regions the compositor must forward the event to the main thread; for fast‑scrollable regions it can handle scrolling directly, preserving smoothness.
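The compositor's routing decision can be sketched as a point-in-region test against the non-fast-scrollable regions. The region shape and function name are illustrative, not Chrome's internal representation:

```javascript
// Sketch of the compositor's routing decision: if the scroll point falls
// inside a region with JS event listeners ("non-fast-scrollable"), the
// event must be forwarded to the main thread; otherwise the compositor
// can handle the scroll on its own thread.
function routeScrollEvent(point, nonFastScrollableRegions) {
  const hit = nonFastScrollableRegions.some(r =>
    point.x >= r.x && point.x < r.x + r.width &&
    point.y >= r.y && point.y < r.y + r.height
  );
  return hit ? 'main-thread' : 'compositor';
}
```

The larger the listener-covered area, the more often scrolling must take the slow main-thread path.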
Passive Listeners and Event Delegation
Attaching a single listener to document.body (event delegation) marks the whole page as non‑fast‑scrollable, forcing the compositor to wait for the main thread on every event. Adding the {passive:true} option tells the browser that the listener will not call preventDefault(), allowing the compositor to continue scrolling independently.
document.body.addEventListener('touchstart', event => {
  if (event.target === area) {
    // With {passive: true}, this call is ignored: a passive listener
    // cannot cancel the default scrolling behavior.
    event.preventDefault();
  }
}, {passive: true});

If the handler genuinely needs to cancel scrolling for some targets, it cannot be passive; the passive flag is a promise that preventDefault() will not be needed.

Coalescing and Delaying Events
High‑frequency events (wheel, mousemove, touchmove) are coalesced and dispatched only on the next requestAnimationFrame to avoid overwhelming the main thread. Discrete events (click, keydown) are sent immediately.
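The coalescing behavior can be modeled with a small queue that flushes discrete events immediately and holds only the latest continuous event until the next frame. This is a toy model, not Chrome's actual scheduler; the dispatch callback is injected for testability:

```javascript
// Toy model of input-event coalescing: continuous events are held and
// only the latest one is dispatched per animation frame, while discrete
// events are delivered immediately.
const CONTINUOUS = new Set(['wheel', 'mousemove', 'touchmove', 'pointermove']);

function createEventQueue(dispatch) {
  let pending = null;
  return {
    push(event) {
      if (!CONTINUOUS.has(event.type)) {
        dispatch(event);   // discrete: deliver right away
      } else {
        pending = event;   // continuous: keep only the latest
      }
    },
    onAnimationFrame() {
      if (pending) {
        dispatch(pending);
        pending = null;
      }
    },
  };
}
```

Pushing several mousemove events between frames delivers only the last one, which is exactly why drawing apps need getCoalescedEvents() (below) to recover the intermediate points.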
Retrieving Coalesced Events
For drawing‑heavy applications, developers can call event.getCoalescedEvents() to obtain the original high‑frequency points that were merged.
window.addEventListener('pointermove', e => {
  const events = e.getCoalescedEvents();
  for (const ev of events) {
    const x = ev.pageX;
    const y = ev.pageY;
    // draw using x, y
  }
});

By understanding these internal mechanisms, developers can write code that cooperates with Chrome’s architecture—using async scripts, passive listeners, and appropriate CSS hints—to achieve better performance and smoother user experiences.
Conclusion
Chrome’s multi‑process design, navigation pipeline, rendering stages, and compositor‑driven input handling work together to deliver fast, secure, and responsive web pages. Aligning web code with these internals—through async loading, passive event listeners, service‑worker strategies, and layer‑optimizations—helps developers build sites that fully leverage the browser’s capabilities.