Why the Second Call to a JavaScript Constructor Is Slower: Inside V8’s Inline Caches
An in‑depth look at V8's Inline Cache mechanism: how hidden classes, map transitions, type‑feedback vectors, and the IC state machine make the second invocation of a JavaScript constructor slower than the first while the third becomes faster, explaining performance variation in property accesses.
Introduction
JavaScript’s dynamic typing provides flexibility but can degrade runtime performance because type changes invalidate many optimizations. Modern JavaScript engines mitigate this with Inline Caches (ICs), which cache object layout information to accelerate property reads and writes. The following sections use a simple Point constructor to illustrate V8’s IC implementation.
Example and measurement
function Point(x, y) {
this.x = x;
this.y = y;
}
var p = new Point(0, 1);
var q = new Point(2, 3);
var r = new Point(4, 5);
On a 3.2 GHz, 8‑core machine the execution time of the this.x = x assignment was measured for each call:
p = new Point(0,1) : 4.11 ns
q = new Point(2,3) : 6.63 ns
r = new Point(4,5) : 0.65 ns
The second call is slower than the first, while the third is noticeably faster. The following sections explain why.
Hidden classes (shapes/maps)
Because JavaScript objects lack static type information, V8 assigns each object a hidden class (also called a Shape or Map) that describes its layout—property count, names, and memory offsets. All objects created by the same constructor share the same hidden class initially.
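Hidden classes are not observable from plain JavaScript (with `node --allow-natives-syntax` one could compare them directly via `%HaveSameMap(a, b)`), but the following sketch illustrates which objects end up sharing a transition path:

```javascript
// Illustrative sketch only: hidden classes themselves are invisible
// from plain JS; the map names in comments follow the article's example.
function Point(x, y) {
  this.x = x; // map0 -> map1
  this.y = y; // map1 -> map2
}
const p = new Point(0, 1);
const q = new Point(2, 3); // same transition path as p: shares p's final hidden class

// Adding the same properties in a different order takes a different
// transition path and therefore produces a different hidden class.
const r = {};
r.y = 5;
r.x = 4;
console.log(Object.keys(p).join(',')); // "x,y"
console.log(Object.keys(r).join(',')); // "y,x"
```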
Map transitions
When a property is added for the first time, V8 creates a new hidden class derived from the previous one. In the Point example the constructor starts with an empty hidden class map0. Executing this.x = x creates map1, and executing this.y = y creates map2, giving the transition chain map0 → map1 → map2.
Type feedback vector
Each JavaScript function in V8 has a type_feedback_vector that stores, for each IC site, the last observed hidden class (map) and a compiled IC‑Hit handler. The Point constructor has two IC sites: this.x = x and this.y = y. Once these ICs are warm, the vector records <map0, IC‑Hit handler> for the first site and <map1, IC‑Hit handler> for the second (as explained below, this caching does not happen on the very first call).
IC state machine
V8 classifies an IC into five states, progressing on successive IC‑Misses:
Uninitialized
Pre‑monomorphic
Monomorphic
Polymorphic
Megamorphic
The typical progression is: Uninitialized → Pre‑monomorphic → Monomorphic → Polymorphic → Megamorphic.
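This state machine can be sketched as a toy model in plain JavaScript, in the spirit of Vyacheslav Egorov's "explaining JS VMs in JS" exercises. All names here (FeedbackSlot, POLY_LIMIT, and so on) are illustrative inventions, not V8 internals:

```javascript
// Toy model of one IC slot's state machine (illustrative only).
const POLY_LIMIT = 4; // max cached maps before falling back to megamorphic

class FeedbackSlot {
  constructor() {
    this.state = 'uninitialized';
    this.cached = []; // array of {map, handler} pairs
  }
  load(map, compileHandler) {
    const hit = this.cached.find((e) => e.map === map);
    if (hit) return hit.handler; // IC-Hit: reuse the compiled handler
    // IC-Miss: advance the state machine
    if (this.state === 'uninitialized') {
      this.state = 'premonomorphic';
      return compileHandler(map); // run once, cache nothing yet
    }
    this.cached.push({ map, handler: compileHandler(map) }); // expensive compile
    if (this.cached.length === 1) this.state = 'monomorphic';
    else if (this.cached.length <= POLY_LIMIT) this.state = 'polymorphic';
    else this.state = 'megamorphic';
    return this.cached[this.cached.length - 1].handler;
  }
}

const slot = new FeedbackSlot();
const handlerFor = (map) => () => `load x via ${map}`;
slot.load('map0', handlerFor); // first miss
console.log(slot.state); // "premonomorphic"
slot.load('map0', handlerFor); // second miss: handler compiled and cached
console.log(slot.state); // "monomorphic"
slot.load('map0', handlerFor); // hit: cached handler reused
console.log(slot.cached.length); // 1
```

Note how the model mirrors the measurements above: the first miss is cheap (no caching), the second miss pays for handler compilation, and the third call is a fast hit.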
Why the second call is slower
During the first call the type_feedback_vector is empty, so an IC‑Miss occurs and the IC moves to the pre‑monomorphic state. V8 adds property x (creating map1) but neither caches the map nor compiles an IC‑Hit handler, on the assumption that many functions run only once and eager caching would be wasted work.
The second call also sees an empty type_feedback_vector, causing another IC‑Miss. The state advances to monomorphic , and V8 now compiles an IC‑Hit handler and caches map0 in the vector. Compiling the handler is expensive, making the second execution slower than the first.
On the third call the cached map0 matches the current object’s hidden class, so V8 can directly invoke the IC‑Hit handler without recompilation, resulting in the fastest execution.
Polymorphic and megamorphic cases
function f(o) { return o.x; }
// pre‑monomorphic
f({x:1});
// monomorphic
f({x:2});
// polymorphic (degree 2)
f({x:3, y:1});
// polymorphic (degree 3)
f({x:4, z:1});
// polymorphic (degree 4)
f({x:5, a:1});
// megamorphic
f({x:6, b:1});
When a function sees objects with many different hidden classes, the IC transitions to polymorphic (handling a few maps) and eventually to megamorphic (too many maps). In the megamorphic state V8 stores the maps in a global hashtable rather than the function’s feedback vector, leading to higher lookup costs.
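The lookup-cost difference can be sketched with two toy caches (again, these names and structures are illustrative models, not V8's actual data structures):

```javascript
// Sketch: a polymorphic IC can linearly scan a tiny per-site array,
// while a megamorphic IC falls back to a shared hashtable (modeled
// here with a Map), paying hashing and indirection on every lookup.
const polyCache = [
  { map: 'mapA', handler: () => 'A' },
  { map: 'mapB', handler: () => 'B' },
];
function polyLookup(map) {
  for (const e of polyCache) {
    if (e.map === map) return e.handler; // short linear scan, usually hits early
  }
  return null; // IC-Miss
}

const megamorphicTable = new Map([
  ['mapA', () => 'A'],
  ['mapB', () => 'B'],
  ['mapC', () => 'C'],
]);
function megaLookup(map) {
  return megamorphicTable.get(map) ?? null; // hashed lookup in a shared table
}

console.log(polyLookup('mapB')()); // "B"
console.log(megaLookup('mapC')()); // "C"
```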
Performance implications
Code that stays in the monomorphic IC state runs fastest.
In polymorphic state V8 performs a linear search over a small set of cached maps.
Megamorphic IC‑Hits are slower because they require a global hashtable lookup, though still faster than an IC‑Miss.
An IC‑Miss is the most expensive path.
For developers concerned with cold‑start performance, minimizing IC‑Misses—by keeping object shapes consistent—can significantly reduce startup latency.
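One common way to keep shapes consistent is to initialize every property in the constructor, always in the same order, so every instance follows the identical transition path (the User example below is an illustration, not from the original benchmark):

```javascript
// Shape-consistent construction: all properties initialized up front,
// in a fixed order, so all instances share one hidden class and
// property stores at these sites stay monomorphic.
function User(name, age) {
  this.name = name;
  this.age = age ?? 0; // initialize even when absent, rather than adding later
}
const a = new User('ada', 36);
const b = new User('bob'); // same transition path despite the missing argument
console.log(Object.keys(a).join(',')); // "name,age"
console.log(Object.keys(b).join(',')); // "name,age"
```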
Tencent Cloud Middleware