How to Supercharge Your 2020 Front‑End Performance: The Ultimate Checklist
This comprehensive 2020 front‑end performance checklist guides teams through planning, metric selection, environment setup, static asset optimization, build and transport strategies, testing, monitoring, and realistic goal setting, offering tools, budgets, and cultural practices to achieve faster, more responsive web experiences.
Introduction
Make 2020 faster! This annual front‑end performance checklist covers everything needed for a fast web experience. It has been updated every year since 2016 and is supported by LogRocket, a front‑end performance monitoring solution.
Web performance is a tricky beast, isn’t it? How can we know our real performance level and bottlenecks—large JavaScript files, slow web‑font delivery, heavy images, or slow rendering? Should we study tree‑shaking, scope hoisting, code splitting, lazy loading with IntersectionObserver, server push, client hints, HTTP/2, service workers, or edge workers? Most importantly, where do we start optimizing, and how do we build a long‑term, performance‑focused team culture?
In the past, performance was often ignored early and postponed to the end, reduced to code minification, parallel requests, static assets, or server config tweaks. Now performance optimization has changed dramatically.
Performance is not just a technical issue; it affects accessibility, usability, and SEO, and once performance is embedded in the workflow, design and business decisions must account for its impact. Performance must be continuously measured, monitored, and improved, but the growing complexity of the web brings new challenges for tracking metrics, which vary by device, browser, protocol, network type, and latency (CDNs, ISPs, caches, proxies, firewalls, load balancers, and server configuration all affect web performance).
So, what would a complete checklist of performance optimization points (from development start to final launch) look like? Below you will find the 2020 Front‑End Performance Checklist —a concise overview of scenarios and optimization techniques to achieve fast response times, smooth interactions, and bandwidth‑friendly sites.
Table of Contents
Preparation: Planning and Metrics
Choosing the Right Metrics
Defining the Environment
Static Asset Optimization
Build Optimization
Transport Optimization
Network & HTTP/2
Testing & Monitoring
Quick‑Start Solutions
Download Checklist (PDF, Apple Pages, MS Word)
Let’s Go!
Preparation: Planning and Metrics
Small, frequent optimizations help, but it is more important to set clear, measurable performance goals early, as they influence every decision. Several performance metric models exist; choose what fits your project.
01 Build a Performance‑Optimization Culture
Many teams know common performance problems and solutions, but without a culture that values performance, each decision becomes a siloed debate. To gain business support, conduct case studies or use the Performance API to prove performance improvements for key business metrics (KPIs).
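As a sketch of that Performance API approach, User Timing marks and measures can bracket a business‑critical moment so its duration can be reported next to the KPI it affects. The `checkout` names below are hypothetical, not from the original article:

```javascript
// Sketch: bracketing a business-critical moment with User Timing marks.
// The "checkout" names are illustrative; `performance` is a global in
// browsers and in Node.js 16+.
performance.mark('checkout-start');

// ...render the checkout component (placeholder work)...
for (let i = 0; i < 1e5; i++);

performance.mark('checkout-rendered');
performance.measure('checkout', 'checkout-start', 'checkout-rendered');

const [checkoutEntry] = performance.getEntriesByName('checkout');
// checkoutEntry.duration can now be sent to analytics next to the KPI.
console.log(`checkout took ${checkoutEntry.duration.toFixed(1)} ms`);
```

In production, such measures are typically picked up by a `PerformanceObserver` and shipped to your RUM endpoint rather than logged.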
If development/design and business/marketing cannot align on performance goals, optimisation will not last. Focus on common complaints such as high bounce rates or low conversion, adjust your arguments based on stakeholder feedback, and run performance experiments on both mobile and desktop (e.g., with Google Analytics). Use research from WPO Stats to back up the business impact of performance.
Allison McKnight’s talk on building a long‑term performance culture and Tammy Everts’ discussion on fostering performance awareness in teams provide useful case studies.
Brad Frost’s performance budget generator and Jonathan Fielding’s Performance Budget Calculator can help you set and visualise performance budgets.
02 Goal: Be At Least 20% Faster Than Your Fastest Competitor
Psychology research suggests users perceive a site as fast only if it is at least 20% faster than competitors. Study competitors’ mobile and desktop metrics, set thresholds, and run your tests against the 90th percentile of your real‑user data.
Use the Chrome UX Report (CrUX) or the Treo site (powered by CrUX) for real‑user data, or alternatives like Speed Scorecard, Real User Experience Test Comparison, and SiteSpeed CI.
Treo Sites provides competitive analysis based on real‑world data.
Note: When using Page Speed Insights or its API, you can also retrieve CrUX data for specific pages, useful for landing pages or product listings. If you test performance budgets in CI, ensure the test environment matches CrUX conditions.
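As an illustration of querying that API per page, a request URL for the v5 endpoint can be built as below; the endpoint path and the `loadingExperience` response field are my understanding of Google’s current API and may change, so treat this as a sketch:

```javascript
// Sketch: building a PageSpeed Insights v5 API request. The response
// includes a `loadingExperience` section with CrUX field data when
// Chrome has enough real-user samples for that URL.
function psiRequestUrl(pageUrl, { strategy = 'mobile', key } = {}) {
  const params = new URLSearchParams({ url: pageUrl, strategy });
  if (key) params.set('key', key); // API key for higher quotas
  return `https://www.googleapis.com/pagespeedonline/v5/runPagespeed?${params}`;
}

const url = psiRequestUrl('https://example.com/landing', { strategy: 'mobile' });
// fetch(url).then(r => r.json()).then(d => d.loadingExperience.metrics)
console.log(url);
```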
To understand why speed rankings matter, Sergey Chernyshev developed a UX Speed Calculator that visualises the impact of performance on bounce rate, conversion, and revenue.
The UX Speed Calculator visualises performance impact on bounce rates, conversion, and total revenue based on real data.
Collect data, create tables, and set a 20% performance improvement target (performance budget). Aim to keep the smallest effective payload to achieve a fast interactive time.
Need resources to start?
Addy Osmani’s article on how to start a performance budget explains quantifying new feature impact and where to begin when you exceed the budget.
Lara Hogan’s guide on designing under a performance budget helps designers.
Harry Roberts’ guide on using Request Map with Google Sheets to show third‑party script impact.
Jonathan Fielding’s and Katie Hempenius’ Performance Budget Calculators and the Browser Calories tool (thanks to Karolina Szczur) help create budgets.
Many tools can visualise budgets with build‑size graphs, such as SiteSpeed.io , SpeedCurve , and Calibre . Find more tools at Perform.rocks .
Once you have a budget, integrate it with Webpack performance hints, Bundlesize, Lighthouse CI, PWMetrics, or Sitespeed CI so that pull requests receive performance scores.
Publish budgets via the LightWallet integration in Lighthouse or LHCI actions in GitHub Actions. For custom visualisation, use the WebPageTest chart API.
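For the Webpack performance hints mentioned above, a minimal sketch of enforcing a budget looks like the following; the 170 KB figure is an example, and note that Webpack checks emitted (uncompressed) asset sizes, so the numbers should reflect your pre‑gzip budget:

```javascript
// webpack.config.js (sketch) — fail the build when an asset or
// entrypoint exceeds the budget. Sizes are in bytes and are measured
// on the emitted (uncompressed) assets, before gzip.
module.exports = {
  // ...existing entry/output/loader configuration...
  performance: {
    hints: 'error',            // use 'warning' to report without failing CI
    maxAssetSize: 170 * 1024,  // per-asset budget
    maxEntrypointSize: 170 * 1024,
  },
};
```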
Like Pinterest, you can create a custom ESLint rule to forbid heavy dependencies that bloat bundles, sharing a “safe‑to‑use” package list across the team.
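Short of writing a custom rule as Pinterest did, ESLint’s built‑in `no-restricted-imports` rule achieves a similar effect; the package choices below are illustrative, not a recommendation list:

```javascript
// .eslintrc.js (sketch) — block known-heavy packages at lint time and
// point developers toward lighter, team-approved alternatives.
module.exports = {
  rules: {
    'no-restricted-imports': ['error', {
      paths: [
        { name: 'moment', message: 'Use date-fns or dayjs instead.' },
        { name: 'lodash', message: 'Import lodash/<fn> individually.' },
      ],
    }],
  },
};
```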
Define acceptable thresholds for critical user actions and establish an organisation‑wide “UX Ready” timing marker. Align cross‑department paths to reduce later performance debates.
During UX design, decide component priority early; this influences the order of CSS and JavaScript imports and makes build‑time ordering easier. For work that is expensive but not immediately needed, aim for idle‑until‑urgent patterns.
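The idle‑until‑urgent pattern (popularised by Philip Walton) can be sketched as a small wrapper: defer an expensive computation to idle time, but compute it synchronously if it is needed before the browser got around to it. This falls back to `setTimeout` where `requestIdleCallback` is unavailable:

```javascript
// Sketch of "idle until urgent": compute lazily during idle time,
// but eagerly on first use if idle time never arrived.
const scheduleIdle =
  typeof requestIdleCallback === 'function' ? requestIdleCallback : setTimeout;

class IdleValue {
  constructor(init) {
    this._init = init;
    this._done = false;
    scheduleIdle(() => this._compute()); // opportunistic idle evaluation
  }
  _compute() {
    if (!this._done) {
      this._value = this._init();
      this._done = true;
    }
  }
  getValue() {
    this._compute(); // urgent path: compute now if idle never fired
    return this._value;
  }
}

// Intl formatters are a classic example of expensive-to-construct objects:
const formatter = new IdleValue(() => new Intl.DateTimeFormat('en-US'));
// Later, when actually needed:
console.log(formatter.getValue().format(new Date(0)));
```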
Planning, planning, planning. Quick wins are tempting, but without realistic, company‑specific performance goals, long‑term maintenance will suffer.
The differences between First Contentful Paint, First Meaningful Paint, visual rendering completion, and Time to Interactive are explained in a referenced slide deck.
In early 2020, new metrics will appear in Lighthouse v6: Largest Contentful Paint (LCP) and Total Blocking Time (TBT) , while First Meaningful Paint (FMP) is deprecated.
03 Choose the Right Metrics
Not all metrics are equally important. Choose metrics that reflect the speed of showing the most important pixels and the responsiveness of those pixels. The best metrics focus on user experience, not just load time or server response.
Tim Kadlec and Marcos Iglesias classify metrics into groups: quantity‑based (request count, weight, score), milestone‑based (time to first byte, time to interactive), rendering‑based (start render, speed index), and custom (business‑specific events). For most cases, the most relevant are:
Time to Interactive (TTI) : when layout is stable, critical fonts are visible, and the main thread is idle for user input.
First Input Delay (FID) or input responsiveness: time from first user interaction to browser response.
Largest Contentful Paint (LCP) : when the most important content element is painted.
Total Blocking Time (TBT) : the sum of the blocking portions (time beyond 50 ms) of long tasks between first paint and interactive.
Cumulative Layout Shift (CLS) : frequency of unexpected layout changes.
Speed Index : visual fill speed (less important now that LCP is available).
CPU time spent: shows main‑thread blocking frequency and duration.
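The TBT definition above can be sketched in a few lines, assuming long‑task entries like those a `PerformanceObserver` observing `{ type: 'longtask' }` would deliver:

```javascript
// Sketch: computing Total Blocking Time from long-task entries.
// Only the portion of each task beyond the 50 ms threshold counts
// as "blocking".
function totalBlockingTime(longTasks) {
  return longTasks.reduce(
    (tbt, task) => tbt + Math.max(0, task.duration - 50),
    0
  );
}

// 120 ms and 70 ms tasks block for 70 + 20 = 90 ms; 40 ms doesn't count.
console.log(totalBlockingTime([
  { duration: 120 },
  { duration: 70 },
  { duration: 40 },
])); // → 90
```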
Note that First Meaningful Paint (FMP) is deprecated and will be removed from Lighthouse.
Steve Souders provides detailed explanations for each metric. TTI is measured in lab audits, while FID requires real‑user data.
Metric importance varies by application: Netflix TV UI cares about input responsiveness, CPU usage, and TTI; Wikipedia cares about visual change timings and CPU metrics.
Note: FID and TTI do not consider scrolling; scrolling can be measured separately.
User‑centric performance metrics better reflect real user experience. First Input Delay is a new metric aiming to achieve this.
04 Collect Data on Typical User Devices
To collect accurate data, choose representative devices based on market share. Android accounts for 87% of global phones; consumers upgrade every two years; in the US the upgrade cycle is 33 months. A representative slow device might be a 24‑month‑old Android phone under 200 USD on a 3G network with 400 ms RTT and 400 kbps throughput.
Typical test devices include Moto G4/G5 Plus, Samsung mid‑range (Galaxy A50, S8), Nexus 5X, Xiaomi A3 or Redmi Note 7, and a cheap Nexus 4.
When testing, avoid focusing on a single chipset; include Snapdragon generations, Apple CPUs, and low‑end Rockchip or MediaTek chips.
If you lack devices, simulate on desktop with throttled 3G (300 ms RTT, 1.6 Mbps down, 0.8 Mbps up) and CPU slowdown (5×). Then test on slower 3G, slower 4G, and Wi‑Fi. Some teams introduce a 2G network every Tuesday to surface slow‑network issues.
Facebook’s “2G Tuesdays” make slow‑network testing easier.
Many tools help automate data collection and track performance over time. Use lab tools (Lighthouse, Calibre, WebPageTest) for development and Real‑User Monitoring (SpeedCurve, New Relic) for production.
Combine RUM APIs (Navigation Timing, Resource Timing, Paint Timing, Long Tasks) with tools like PWMetrics, Calibre, SpeedCurve, mPulse, Boomerang, and Sitespeed.io. Server‑Timing headers can also expose backend performance.
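As a sketch of the Server‑Timing idea, the header is just a comma‑separated list of `name;dur=…;desc="…"` entries, which the browser then surfaces on Resource Timing entries (`entry.serverTiming`). The metric names below are illustrative:

```javascript
// Sketch: formatting a Server-Timing header so backend phases show up
// in the browser's performance tooling and RUM data.
function serverTimingHeader(metrics) {
  return Object.entries(metrics)
    .map(([name, { dur, desc }]) =>
      desc ? `${name};dur=${dur};desc="${desc}"` : `${name};dur=${dur}`)
    .join(', ');
}

const header = serverTimingHeader({
  db: { dur: 53, desc: 'Database' },
  render: { dur: 27 },
});
console.log(header); // db;dur=53;desc="Database", render;dur=27
// e.g. res.setHeader('Server-Timing', header) in an Express handler.
```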
Note: External network throttlers are more reliable than browser‑built‑in throttling. Use Network Link Conditioner on macOS, Windows Traffic Shaper on Windows, netem on Linux, or dummynet on FreeBSD.
Lighthouse is a performance audit tool built into Chrome DevTools.
05 Create “Clean” and “Customer” Profiles for Testing
When running tests, use a clean profile: disable antivirus, background CPU tasks, background bandwidth usage, and browser extensions. This avoids skewed results in Firefox and Chrome.
Also create a “customer” profile that mirrors common extensions used by users, as extensions can have a noticeable performance impact.
06 Share Performance Culture with Your Team
Ensure every team member understands the performance culture, to avoid decisions based on misunderstandings. Assign responsibility and ownership, weigh design decisions against performance budgets, and use those budgets and priorities to guide choices.
Set Realistic Goals
07 Response Time 100 ms, 60 fps
For smooth interaction, aim for a response time under 100 ms, ideally under 50 ms of input latency. The RAIL model defines a healthy page as responding to input within 100 ms, which means handling it on the main thread within 50 ms to leave headroom for other work. Animation frames should finish within 16 ms (ideally under 10 ms, leaving time for browser overhead) to achieve 60 fps. 120 fps solutions exist but are beyond current goals.
RAIL is a user‑centred performance model.
08 3G Environment: FID < 100 ms, TTI < 5 s, Speed Index < 3 s, Critical File Size < 170 KB (gzipped)
Target Speed Index < 3 s, TTI < 5 s (2 s for repeat visits with Service Workers), LCP < 1 s, and keep total blocking time low. Simulate on a sub‑200‑USD Android phone (e.g., Moto G4) on a slow 3G network (400 ms RTT, 400 kbps).
Because of TCP slow start, only about the first 14 KB of the HTML can be delivered in the first round trip; that makes those 14 KB the most critical chunk for a fast page load.
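The 14 KB arithmetic can be sketched as follows. This is an idealised model—an initial congestion window of ~10 segments (~14 KB) doubling each round trip—and ignores loss, delayed ACKs, and pacing; it only illustrates why payloads grow expensive in round trips, not bytes:

```javascript
// Idealised TCP slow-start sketch: the congestion window starts at
// ~14 KB and doubles every round trip, so round trips needed for a
// payload grow roughly logarithmically with its size.
function roundTripsFor(payloadKB, initialWindowKB = 14) {
  let sent = 0;
  let windowKB = initialWindowKB;
  let trips = 0;
  while (sent < payloadKB) {
    sent += windowKB;
    windowKB *= 2;
    trips += 1;
  }
  return trips;
}

console.log(roundTripsFor(14));  // → 1  (fits in the first round trip)
console.log(roundTripsFor(170)); // → 4  (14 + 28 + 56 + 112 KB)
```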
TCP BBR congestion control can improve throughput and latency compared to classic TCP.
JavaScript size budgets are crucial: a 170 KB gzipped bundle may expand to ~0.7 MB when decompressed, which is already heavy for low‑end devices.
For emerging markets, consider stricter budgets: Addy Osmani recommends a <30 KB gzipped initial JS bundle (the “PRPL‑30” budget) and keeping lazily‑loaded route bundles under 35 KB.
The average JS bundle size is now ~417 KB (up 42% since 2015), leading to 15–25 s Time to Interactive on median mobile devices.
2019 top‑selling smartphones CPU benchmarks.
JS bundle size limits are not absolute; you can exceed them if you track main‑thread CPU usage with tools like Calibre, SpeedCurve, or Bundlesize, and integrate them into your build pipeline.
Performance budgets should adapt to network conditions; slower connections make every kilobyte expensive.
With HTTP/2, 5G, powerful phones, and SPA prevalence, rigid budgets may seem odd, but network variability, data caps, proxies, and roaming costs still justify them.
From Addy Osmani’s “Fast by Default: Modern Loading Best Practices”.
Performance budgets should be adjusted based on typical mobile network conditions (image source: Katie Hempenius).
Define the Environment
09 Choose and Configure Build Tools
Don’t chase “cool” tools; stick with a build process that works (Grunt, Gulp, Webpack, Parcel, or combos). Webpack is mature with many plugins; Rollup is gaining traction.
Webpack documentation, code obfuscation articles, and annotated configs are good starting points.
Sean Larkin’s free “Webpack Core Concepts” course and Jeffrey Way’s “Webpack for Everyone” are beginner‑friendly.
Webpack fundamentals 4‑hour course on Frontend Masters.
Advanced guides: “Improving Build Performance with Webpack” and research on bundle compression.
Webpack examples repository and configuration generators.
awesome‑webpack collection of resources.
10 Default to Progressive Enhancement
Progressive enhancement remains a solid principle: build core experience first, then layer advanced browser features for richer experiences. This yields resilient experiences on low‑end devices and even better performance on high‑end devices.
Adaptive module serving lets you send a “lean” core to low‑end devices and richer features to high‑end devices.
11 Choose a Good Performance Baseline
Many factors affect load performance (network, load balancers, caches, third‑party scripts, parsers, I/O, etc.). JavaScript’s cost is high, second only to web fonts and images. As performance bottlenecks shift to the client, developers must consider network transfer cost, parsing/compilation time, and runtime cost.
Assess frameworks for network cost, parsing time, and runtime cost. Modern browsers have improved script parsing speed, but execution remains a major bottleneck.
Sebastian Markbåge suggests measuring framework startup cost by rendering a view, then removing it and re‑rendering, to separate warm‑up cost from steady‑state cost.
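The render/remove/re‑render idea can be sketched as a tiny timing harness. `renderView` and `teardown` are placeholders for your framework’s mount and unmount calls, not real APIs:

```javascript
// Sketch: time a cold render (parse, JIT warm-up, style/layout) and a
// warm re-render of the same view to estimate startup overhead.
// `performance` is global in browsers and in Node.js 16+.
function measureStartup(renderView, teardown) {
  const t0 = performance.now();
  renderView();
  const cold = performance.now() - t0;

  teardown();

  const t1 = performance.now();
  renderView();
  const warm = performance.now() - t1;

  return { cold, warm, startupOverhead: cold - warm };
}

// Placeholder "view" doing throwaway work in place of a real mount:
const result = measureStartup(
  () => { for (let i = 0; i < 1e5; i++) Math.sqrt(i); },
  () => {}
);
console.log(result);
```

Single runs are noisy; in practice you would repeat the measurement and compare medians.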
12 Evaluate Frameworks and Dependencies
Not every web app needs a front‑end framework; even SPA pages may not need one. Netflix removed React from its login page, cutting time to interactive by more than 50%, and now prefetches React for subsequent pages.
Some projects benefit from removing frameworks entirely. Choose wisely, evaluate performance, and consider long‑term maintenance.
Inian Parameshwaran evaluated the top 50 frameworks; Vue and Preact were fastest overall, followed by React.
Ankur Sethi’s research shows that on an average phone in India, a React app loads no faster than ~1.1 s, an Angular app ~2.7 s, and a Vue app ~1 s.
Use tools like webpack‑bundle‑analyzer, Source Map Explorer, Bundle Buddy, Bundlephobia, size‑plugin, and Import Cost to assess bundle impact.
Seed projects: Gatsby (React), Vuepress (Vue), Preact CLI, and PWA Starter Kit provide solid defaults.
CPU and compute performance of best‑selling phones (image source: Addy Osmani).
13 Consider PRPL Pattern and App Shell Architecture
Use the PRPL pattern (Push, Render, Pre‑cache, Lazy‑load) and App Shell to deliver the minimal code needed for the initial route, then cache and lazily load remaining resources.
PRPL stands for pushing critical resources, rendering the initial route, pre‑caching remaining routes, and lazy‑loading remaining routes on demand.
App Shell is the minimal HTML, CSS, and JavaScript that supports the UI.
14 Optimize API Performance
When designing APIs, use a sensible protocol. RESTful APIs are widely validated; however, they can become performance bottlenecks. GraphQL allows a single request to fetch exactly the needed data, reducing over‑fetching.
Eric Baer’s Smashing Magazine articles introduce GraphQL and its benefits.
REST vs. GraphQL comparison (image source: Hacker Noon).
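To make the over‑fetching point concrete, here is a sketch of a single GraphQL request replacing several REST round trips; the schema, field names, and `/graphql` endpoint are hypothetical:

```javascript
// Sketch: one GraphQL POST asking for exactly the fields a view needs,
// instead of several REST calls that each return full resources.
function buildGraphQLRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  };
}

const request = buildGraphQLRequest(
  `query Profile($id: ID!) {
     user(id: $id) { name avatarUrl posts(first: 3) { title } }
   }`,
  { id: '42' }
);
// fetch('/graphql', request) would replace /users/42, /users/42/posts, ...
```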
15 AMP vs. Instant Articles vs. Apple News
Depending on strategy, consider Google AMP, Facebook Instant Articles, or Apple News. AMP provides a CDN‑backed performance framework; Instant Articles improve visibility on Facebook. Both can boost SEO but require separate maintenance.
AMP is generally faster but not always the best for every site.
16 Choose a CDN Wisely
Static site generators can push content to a CDN, avoiding database hits. JAMstack sites (Gatsby, Vuepress, Preact CLI, PWA Starter Kit) benefit from static hosting platforms. Ensure the CDN supports compression, image optimisation, service workers, and HTTP/3.
Note that HTTP/2 prioritisation is often ineffective on many CDNs; be cautious.
References
Original article: https://www.smashingmagazine.com/2020/01/front-end-performance-checklist-2020-pdf-pages
Performance API: https://developer.mozilla.org/en-US/docs/Web/API/Performance
Performance budget tools, metrics, and research links (see original source for full list).
WecTeam
WecTeam (维C团) is the front‑end technology team of JD.com’s Jingxi business unit, focusing on front‑end engineering, web performance optimization, mini‑program and app development, serverless, multi‑platform reuse, and visual building.