How Deep Is Your DOM? Measuring the Real Impact of DOM Depth on Rendering Speed
This article explores how the depth of a webpage's DOM tree influences memory usage, style calculations, and rendering performance, presenting experiments that compare shallow and deeply nested structures and offering practical tips for monitoring and optimizing DOM depth.
This article is translated from the English post “How Deep is Your DOM?”.
When using Lighthouse to measure website performance, you may encounter the “Avoid an excessive DOM size” warning.
Lighthouse warns us to avoid too many DOM nodes because they increase memory usage and can trigger expensive style calculations. Combined with other factors on your site, this can affect user experience, especially on low‑end devices.
While reviewing a performance report, the warning caught my eye, but the metric that made me rethink was not the total number of DOM elements—it was the maximum DOM depth reported. This raised the question:
How does DOM depth affect rendering performance?
When we use tree‑like data structures such as the DOM, the depth of the tree has a large impact on the speed of operations like look‑ups. Consider these two DOM trees:
Both trees contain the same total number of elements, but one has a shallow depth of 2 while the other is deeper at depth 6. The deeper tree requires more operations to reach a given element.
For example, to access an <img> element from the root. In the shallow tree this takes only two steps:

body.children[4];

In the deeper tree it takes six:

body.children[0].children[0].children[0].children[0].children[0];

Tree depth is especially important for structures like binary search trees, which is why many self-balancing BST implementations keep the height minimal to guarantee fast look-ups.
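The difference in look-up cost can be demonstrated outside the browser. In this sketch (not code from the article), plain objects with a `children` array stand in for DOM nodes, and a small helper counts how many hops it takes to reach the deepest node in each shape of tree:

```javascript
// Plain-object stand-ins for DOM nodes, each with a `children` array.

// Shallow tree: every node is a direct child of the root.
const shallow = {
  children: Array.from({ length: 100 }, () => ({ children: [] })),
};

// Deep tree: each node wraps the next, 100 levels down.
let deep = { children: [] };
for (let i = 0; i < 100; i++) {
  deep = { children: [deep] };
}

// Count how many `children` hops it takes to reach the deepest node.
function hopsToDeepest(node) {
  let hops = 0;
  while (node.children.length > 0) {
    node = node.children[0];
    hops++;
  }
  return hops;
}

console.log(hopsToDeepest(shallow)); // 1
console.log(hopsToDeepest(deep));    // 100
```

Both trees hold 100 nodes, but the flat one reaches its deepest node in a single hop while the nested one needs a hop per level.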
Theoretically, a deeper tree is slower, but how much does it affect real‑world rendering performance?
Simple Test
To investigate, I created two HTML pages, each containing three lines of text and 100 empty <div> elements. The only difference is that in one page all <div>s are placed directly under <body>, while in the other they are nested.
Shallow page example:
<html>
<body>
<div></div>
<div></div>
<div></div>
<div></div>
<!-- 95 divs later... -->
<div>This is the last of 100 divs.</div>
</body>
</html>

Deeply nested page example:
<html>
<body>
<div>
<div>
<div>
<div>
<!-- 95 divs later... -->
<div>This is the last of 100 divs.</div>
</div>
</div>
</div>
</div>
</body>
</html>

The first page loaded in 51 ms (parse + render + paint), while the second took 53 ms, only a slight difference. Testing with 200, 300, 400, and finally 500 <div>s revealed a clear gap.
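For reproducing the experiment, test pages like these can be generated with a short script. This is an assumed setup, not the author's actual tooling; it produces two pages with the same number of <div> elements but different depths (the three lines of text from the original experiment are omitted for brevity):

```javascript
// Generate a page with n sibling <div>s directly under <body>.
function shallowPage(n) {
  const divs = Array.from({ length: n }, () => "<div></div>").join("\n");
  return `<html>\n<body>\n${divs}\n</body>\n</html>`;
}

// Generate a page with n <div>s nested one inside another.
function deepPage(n) {
  let inner = "<div>This is the innermost div.</div>";
  for (let i = 1; i < n; i++) {
    inner = `<div>${inner}</div>`;
  }
  return `<html>\n<body>\n${inner}\n</body>\n</html>`;
}

// Both pages contain exactly n <div> elements; only the depth differs.
const n = 500;
console.log((shallowPage(n).match(/<div>/g) || []).length); // 500
console.log((deepPage(n).match(/<div>/g) || []).length);    // 500
```

Writing each result to a file and loading it with the browser's Performance panel open reproduces the parse + render + paint timings discussed above.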
With 500 <div>s, the shallow page rendered in 56 ms, barely affected by the element count, whereas the deep page took 102 ms, almost twice as long. Testing up to 5,000 <div>s showed the same trend.
Both pages have identical size and the same number of elements; the only difference is DOM tree depth, which clearly impacts rendering performance. While a depth of 5,000 is unrealistic for most sites, thousands of elements at moderate depths are common.
Even a depth of 32 can require hundreds of milliseconds for parsing, rendering, and painting before CSS and JavaScript are applied, and those layers add further overhead.
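To keep an eye on this metric yourself, you can compute the maximum depth of a tree whose nodes expose a `children` collection. The helper below is a minimal sketch: in a browser you could call it as `maxDepth(document.body)`; here a plain-object tree (a stand-in assumption, since Node has no DOM) demonstrates it:

```javascript
// Return the maximum depth of a tree, counting the given node as level 1.
// Works on any node with an iterable `children` collection.
function maxDepth(node) {
  let deepest = 0;
  for (const child of node.children) {
    deepest = Math.max(deepest, maxDepth(child));
  }
  return deepest + 1;
}

// Plain-object stand-in for a small DOM subtree:
// root -> branch of two levels, plus a single leaf child.
const tree = {
  children: [
    { children: [{ children: [] }] }, // 3 levels deep along this branch
    { children: [] },                 // 2 levels deep along this one
  ],
};

console.log(maxDepth(tree)); // 3
```

Logging this number alongside `document.querySelectorAll("*").length` in the browser gives roughly the same pair of statistics that Lighthouse reports.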
Adding CSS Styles
For deep DOM trees, a potential performance bottleneck is expensive style recalculation. Building on the 5,000-deep <div> test, I added some text to each <div> and a CSS rule:
div {
padding-top: 10px;
}

This rule touches almost every element, forcing the browser to spend considerable time recomputing each <div>'s position, potentially blocking the main thread and degrading responsiveness.
The experiment measured only page‑load performance, but in real usage users also interact with the page after the initial load. Monitoring DOM size and depth matters not because they dramatically slow the first load, but because they set a baseline for all subsequent runtime operations, such as JavaScript‑driven DOM updates.
Conclusion
From this small experiment I drew two main conclusions.
First, modern browsers are astonishingly capable: they can parse, render, and paint a DOM tree with thousands of nested layers in around a hundred milliseconds. While real-world sites rarely approach a 5,000-level depth, seeing browsers handle such loads is impressive.
Second, deep DOM trees may not have as dramatic an impact as expensive JavaScript, but they do introduce measurable differences that can quickly accumulate, making them worth keeping an eye on.