Monitoring and Inspection Practices for Enterprise Front‑End Applications
This article describes how a large enterprise front‑end team implements real‑time monitoring, scheduled inspections, alert strategies, performance metrics, error handling, custom reporting, and mobile/native monitoring to ensure system stability, improve user experience, and continuously optimize application performance.
Modern front‑end applications are increasingly complex, and user experience depends heavily on performance and stability; therefore the team introduced comprehensive monitoring and inspection solutions.
Monitoring background and significance – Integrating the internal SGM monitoring platform and alert system enables real‑time visibility of performance indicators (LCP, CLS, FCP, FID, TTFB) and error events, helping developers, product, and operations teams quickly detect and resolve issues.
Monitoring categories – The solution consists of two parts: real‑time monitoring of all 100+ applications and scheduled automated inspections using UI‑Woodpecker plugins or custom Node.js scripts.
Real‑time monitoring details – Alert precision is balanced with sensitivity to avoid noise; a multi‑level alarm mechanism routes critical alerts via phone and less urgent ones via email or messaging. Responsibilities are clearly assigned, and regular rule optimization keeps alerts effective. Metrics such as LCP (≤2500 ms), CLS (<0.1), FCP (≤1.8 s), FID (≤100 ms), and TTFB (≤1000 ms) are tracked, with temporary LCP thresholds raised to 5 s to reduce false alarms while improvements are made.
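The threshold-and-routing logic above can be sketched as a small classifier. This is an illustrative sketch, not the SGM platform's real API: `classifyMetric`, `routeAlert`, and the "2x overshoot = critical" rule are assumptions; only the threshold values come from the article.

```javascript
// Illustrative sketch: classify a Web Vitals sample against the article's
// thresholds and route it to an alert channel. Function names and the
// severity rule are hypothetical, not the SGM platform's actual API.
const THRESHOLDS = {
  LCP: 2500,  // ms (temporarily relaxed to 5000 while improvements land)
  CLS: 0.1,   // unitless layout-shift score
  FCP: 1800,  // ms
  FID: 100,   // ms
  TTFB: 1000, // ms
};

function classifyMetric(name, value) {
  const limit = THRESHOLDS[name];
  if (limit === undefined) return 'unknown';
  if (value <= limit) return 'ok';
  // Assumed rule: a 2x overshoot is critical, anything else is a warning.
  return value > limit * 2 ? 'critical' : 'warning';
}

function routeAlert(severity) {
  // Multi-level routing: critical -> phone, warning -> email/messaging.
  if (severity === 'critical') return 'phone';
  if (severity === 'warning') return 'email';
  return 'none';
}
```

Keeping classification separate from routing makes it easy to tune thresholds (e.g., the temporary LCP relaxation) without touching the alert channels.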
White‑screen detection and alert configuration – White‑screen monitoring checks whether key DOM elements have rendered. Cross‑origin script errors (reported only as "Script error") are mitigated by adding the crossorigin attribute to script tags, e.g. <script src="http://xxxdomain.com/home.js" crossorigin></script>, and by registering a Vue error handler:
Vue.config.errorHandler = (err, vm, info) => {
  if (err) {
    try {
      console.error(err);
      // Forward the error to the SGM monitoring SDK.
      window.__sgm__.error(err);
    } catch (e) {
      // Swallow reporter failures so error reporting never throws itself.
    }
  }
};

API request monitoring – Alerts focus on HTTP status codes and business error codes. Business error codes are standardized across business lines, e.g.:
{
  "50000X": "Program exception (internal)",
  "500001": "Program exception (upstream)",
  "500002": "Program exception (xx)",
  "500003": "Program exception (xx)"
  // ...
}

Custom reporting captures request parameters and responses for rapid diagnosis, illustrated by JSON configurations such as:
{
  "cookieThor": "",
  "urlPattern": "pro\\.jd\\.com",
  "urls": ["https://b.jd.com/s?entry=newuser"]
}

Additional configurations define hover and click element checks for link validation:
{
  "cookieThor": "",
  "urlPattern": "pro\\.jd\\.com",
  "urls": [{
    "url": "https://b.jd.com/s?entry=newuser",
    "hoverElements": [{ "item": "#focus-category-id .focus-category-item", "target": ".focus-category-item-subtitle" }]
  }]
}

{
  "cookieThor": "",
  "urlPattern": "pro\\.jd\\.com",
  "urls": [{
    "url": "https://b.jd.com/s?entry=newuser",
    "clickElements": [{ "item": "#recommendation-floor .drip-tabs-tab" }]
  }]
}

Resource error monitoring – Handles CSS/JS/image loading failures; image load errors are excluded from alerts via downgrade strategies.
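The filtering rule for resource errors can be sketched as a small predicate plus (commented) browser wiring. `shouldReportResourceError` is an assumed name; the real SDK's hook may differ.

```javascript
// Illustrative sketch of the resource-error filter described above: report
// CSS (<link>) and JS (<script>) load failures, but ignore image failures,
// which a downgrade/placeholder strategy handles. The name is hypothetical.
function shouldReportResourceError(tagName) {
  const tag = String(tagName).toLowerCase();
  if (tag === 'img') return false;           // images degrade gracefully
  return tag === 'link' || tag === 'script'; // report CSS and JS failures
}

// Browser wiring (capture phase, because resource errors do not bubble):
// window.addEventListener('error', (e) => {
//   const el = e.target;
//   if (el && el.tagName && shouldReportResourceError(el.tagName)) {
//     window.__sgm__.error(new Error('Resource failed: ' + (el.src || el.href)));
//   }
// }, true);
```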
Custom reporting for key business flows – Captures interface parameters, user‑behavior traces, and specific failure scenarios (e.g., address selection errors in embedded H5 pages) to aid debugging.
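A reporting wrapper for key business flows might look like the following sketch. `withReporting`, the `report` callback, and the assumption that a business code of "0" marks success are all illustrative, not the team's actual SDK.

```javascript
// Hypothetical sketch: wrap a request so that failures are reported together
// with the request parameters and raw response, as described above.
// Assumes a response shape { code, ... } where code "0" means success.
async function withReporting(requestFn, params, report) {
  try {
    const res = await requestFn(params);
    if (res && res.code && res.code !== '0') {
      // Business-level failure: attach params and response for diagnosis.
      report({ type: 'business', params, response: res });
    }
    return res;
  } catch (err) {
    // Network/transport failure: attach params and the error itself.
    report({ type: 'network', params, error: String(err) });
    throw err;
  }
}
```

Capturing the parameters alongside the response is what makes scenarios like the embedded-H5 address-selection error reproducible after the fact.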
Mobile and mini‑program monitoring – Uses mPaaS crash monitoring, Zhulong performance metrics, and SGM for network/WebView indicators. Business monitoring is applied to login, product detail, and order detail modules with detailed rule sets for each UI component.
Scheduled inspection – Two approaches: (1) UI‑Woodpecker platform schedules tasks; (2) self‑started scripts run automated checks. Example inspection configuration:
{
  "cookieThor": "",
  "urlPattern": "pro\\.jd\\.com",
  "urls": [{
    "url": "https://b.jd.com/s?entry=newuser",
    "clickElements": [{ "item": ".recommendation-product-wrapper .jdb-sku-wrapper" }]
  }]
}

Inspection tools verify link validity after hover or click events, ensuring promotional and external links remain functional.
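An inspection script might flatten a configuration like the one above into a task list for a headless browser to execute. `buildInspectionTasks` is an assumed helper name; the real UI-Woodpecker plugin API may differ.

```javascript
// Illustrative sketch: turn an inspection config (per the JSON above) into a
// flat list of { url, selector, action } checks. The helper name is
// hypothetical, not part of the UI-Woodpecker platform.
function buildInspectionTasks(config) {
  const tasks = [];
  for (const entry of config.urls || []) {
    for (const c of entry.clickElements || []) {
      tasks.push({ url: entry.url, selector: c.item, action: 'click' });
    }
    for (const h of entry.hoverElements || []) {
      tasks.push({ url: entry.url, selector: h.item, target: h.target, action: 'hover' });
    }
  }
  return tasks;
}
```

A runner would then visit each `url`, perform the `action` on `selector`, and assert that the navigated-to or revealed link still resolves.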
Results before and after integration – Prior to monitoring, issues were discovered only via user feedback. After integration, real‑time alerts, performance dashboards, and automated inspections enabled proactive detection, reducing error rates, improving page‑load scores (50+ projects ≥85 pts), and shortening MTTR.
Future plans – Aim for >90 % of applications scoring ≥90 pts, deepen monitoring granularity (e.g., button‑level permission errors), and upgrade the Chrome inspection plugin for broader coverage.
The team invites feedback and collaboration to continuously refine monitoring strategies.
JD Tech Talk
Official JD Tech public account delivering best practices and technology innovation.