Measuring Frontend Performance with Performance Timeline and User Timing APIs
This article guides front‑end developers through the W3C Performance Timeline and User Timing specifications, demonstrating how to use performance.mark, performance.measure, and PerformanceObserver APIs—including async/await patterns—to accurately profile and monitor code execution across browsers.
The author, a front‑end engineer from 360 Qiwutuan and a W3C Performance Working Group member, introduces the Performance Timeline and User Timing specifications and shows how to "score" front‑end code using their APIs.
Why learn these standards? In real projects, performance‑heavy operations (especially frequent DOM manipulations) need quantification. Traditional approaches use Date.now() before and after a function, but this quickly becomes cumbersome when many functions must be measured. The new standards let browsers collect the data automatically.
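The Date.now() approach described above can be sketched as follows; timeIt and buildList are illustrative names, not part of any specification:

```javascript
// A minimal sketch of the traditional approach: wrap each function
// in Date.now() bookkeeping by hand.
const timeIt = (fn, label = fn.name) => {
  const start = Date.now();
  fn();
  const duration = Date.now() - start;
  console.log(`${label}: ${duration}ms`);
  return duration;
};

// Every function to be profiled needs its own wrapper call,
// which quickly becomes unwieldy at scale.
timeIt(function buildList() {
  for (let i = 0; i < 1e5; i += 1) {} // stand-in for heavy DOM work
});
```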
What is Performance Timeline? According to the W3C definition, it provides a way for web developers to access, inspect, and retrieve various performance metrics throughout a web application's lifecycle.
What is User Timing? It extends the original Performance interface, adding methods that let developers actively record performance marks and measures.
As of July 2018, both specifications were at Level 2 and still in draft status.
Browser compatibility tables show support for PerformanceObserver (the main API of Performance Timeline Level 2) and for User Timing across major browsers.
Basic usage
const prefix = fix => input => `${fix}${input}`;
const prefixStart = prefix('start');
const prefixEnd = prefix('end');
const measure = (fn, name = fn.name) => {
performance.mark(prefixStart(name));
fn();
performance.mark(prefixEnd(name));
};
// later
performance.measure(name, prefixStart(name), prefixEnd(name));

Calling performance.mark creates a PerformanceMark entry; calling performance.measure creates a PerformanceMeasure entry that automatically calculates the duration between the two marks.
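The resulting PerformanceMeasure can then be read back by name. A sketch with assumed mark and measure names:

```javascript
// Create a pair of marks around some placeholder work.
performance.mark('start-demo');
for (let i = 0; i < 1e5; i += 1) {} // placeholder workload
performance.mark('end-demo');

// The measure's duration is computed from the two marks' startTime values.
performance.measure('demo', 'start-demo', 'end-demo');

const [entry] = performance.getEntriesByName('demo');
console.log(entry.entryType, entry.duration); // 'measure' and a duration in ms
```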
Retrieving data
const getMarks = key => {
return performance.getEntriesByType('mark')
.filter(({ name }) => name === prefixStart(key) || name === prefixEnd(key));
};
const getDuration = entries => {
const { start, end } = entries.reduce((last, { name, startTime }) => {
if (/^start/.test(name)) last.start = startTime;
else if (/^end/.test(name)) last.end = startTime;
return last;
}, { start: 0, end: 0 });
return end - start;
};
const retrieveResult = key => getDuration(getMarks(key));

For asynchronous functions the pattern is the same; only await is added:
const asyncMeasure = async (fn, name = fn.name) => {
const startName = prefixStart(name);
const endName = prefixEnd(name);
performance.mark(startName);
await fn();
performance.mark(endName);
performance.measure(name, startName, endName);
};

Advanced usage with PerformanceObserver
const observer = new PerformanceObserver(list => {
list.getEntries().forEach(({ name, startTime }) => {
console.log(name, startTime);
// custom logic here
});
});
observer.observe({ type: 'mark', buffered: true });

Using PerformanceObserver eliminates the need to call getEntriesByType manually; the observer receives new entries automatically. The buffered option (default false) controls whether entries recorded before observe() was called are also delivered; note that implementations honor it only when observing a single type, not when passing the entryTypes array.
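One practical detail the observer does not handle for you: mark and measure entries accumulate in the browser's buffers. A sketch of clearing entries once they have been consumed (the entry names here are assumptions):

```javascript
// Record a mark pair and a measure, as in the earlier examples.
performance.mark('start-task');
performance.mark('end-task');
performance.measure('task', 'start-task', 'end-task');

// ...after the entries have been reported somewhere, release them
// so the buffers do not grow unboundedly during long sessions.
performance.clearMarks('start-task');
performance.clearMarks('end-task');
performance.clearMeasures('task');

console.log(performance.getEntriesByName('task').length); // 0
```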
Putting it all together
// measure.js
export const getMeasure = () => {
const observer = new PerformanceObserver(list => {
list.getEntries().forEach(({ name, duration }) => {
console.log(name, duration);
// handle measurement data
});
});
observer.observe({ type: 'measure', buffered: true });
return observer;
};
// entry point
let observer;
if (window.PerformanceObserver) {
observer = getMeasure();
}
// later, when monitoring is no longer needed
if (observer) observer.disconnect();

The article concludes with a set of practical notes: the APIs work for both synchronous and asynchronous code; data reporting should be done judiciously (e.g., via user-agent-based gray-listing); and the focus should be on real-world performance bottlenecks rather than exhaustive benchmarking. It also clarifies that these tools complement, not replace, dedicated benchmark libraries.
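The "judicious reporting" note might look like the following sketch in practice; SAMPLE_RATE, report, and the transport are assumptions for illustration, not part of the article:

```javascript
// Hypothetical sampled reporting: only a fraction of sessions
// ship their measurement data to the server.
const SAMPLE_RATE = 0.1; // report for roughly 10% of sessions
const shouldReport = Math.random() < SAMPLE_RATE;

const report = entries => {
  // In a browser this could be navigator.sendBeacon('/perf', ...);
  // here we only log, since the transport is an assumption.
  console.log('reporting', entries.length, 'entries');
};

// Guarded for environments without PerformanceObserver.
if (typeof PerformanceObserver !== 'undefined' && shouldReport) {
  const observer = new PerformanceObserver(list => report(list.getEntries()));
  observer.observe({ type: 'measure', buffered: true });
}
```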
Overall, Performance Timeline + User Timing provide front‑end developers with powerful, low‑overhead instrumentation for performance‑sensitive projects.
360 Tech Engineering