
How We Built a Robust Monitoring System for Construction Drawing Production

This article describes how our team designed and implemented a comprehensive online monitoring system for construction drawing generation, covering business background, technical architecture analysis, metric definition, monitoring methods, and the resulting dashboards that improve quality, stability, and rapid issue resolution.

Qunhe Technology Quality Tech

Preface

Construction drawings are a key tool for custom production. After designers finish a scene design in the design tool, the drawing export function quickly generates DXF files for factory production. Because the industry is highly specialized, modeling methods and spatial layouts vary widely, so the output is inconsistent and cannot be exhaustively tested across all scenarios in a short time.

To ensure online quality, the drawing team explored various safeguards, such as business inspections, online traffic inspection, and online monitoring, with good results.

The monitoring system took the longest to establish but delivered the greatest impact, providing fast, comprehensive detection of online issues and reducing troubleshooting costs. We share the establishment process here.

1. Business Background

To support the industry’s “what you see is what you get” vision, a key focus is accurate and refined construction drawings. Traditional home decoration relied on on‑site communication and worker experience, leading to large gaps between finished products and renderings.

The custom drawing tool offers merchants independent editing, preview, annotation, and generation of detailed information based on drawing frames, converting design‑tool models into production‑ready views.

2. Monitoring System Diagram

Given this business background, functional and automated testing alone cannot guarantee compliance with merchant production processes. We therefore built a comprehensive monitoring system, summarized in the diagram below and detailed in the following sections.

3. Monitoring Establishment Process

1. Define Monitoring Purpose

We launched the drawing project as a controlled trial with known issues, aiming to continuously improve service quality through monitoring and to detect problems early across the entire business chain: from front‑end to back‑end, hardware to software, and internal to external networks.

2. Analyze Technical Architecture

After setting goals, we analyzed the service architecture and data flow to identify core business nodes and risk points, forming the basis for monitoring items.

In the drawing domain, a page can correspond to multiple views, each view may contain multiple cabinet GREP data. Views are linked to model IDs via elementId lists, making the design tool’s modelId the core data source.
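The page–view–model relationship above can be sketched as a simple data model. This is a minimal illustration only; the class and field names are hypothetical, not the actual drawing-domain schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class View:
    """One view on a drawing page; elementIds link back to design-tool models."""
    view_id: str
    element_ids: List[str] = field(default_factory=list)


@dataclass
class Page:
    """A drawing page that can hold multiple views."""
    page_id: str
    views: List[View] = field(default_factory=list)


def collect_model_ids(page: Page, element_to_model: Dict[str, str]) -> Set[str]:
    """Resolve every elementId on the page to its source modelId.

    The design tool's modelId is the core data source, so this set is
    the ground truth a monitor can compare views against.
    """
    return {
        element_to_model[e]
        for v in page.views
        for e in v.element_ids
        if e in element_to_model
    }
```

With a structure like this, a monitor can traverse each page and confirm that every view resolves to a known modelId.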

3. Determine Monitoring Indicators

We decomposed the business structure to define primary and secondary monitoring indicators:

Business Stability: request volume, success rate, latency, and key scenario interface metrics.

Business Correctness: data flow, calculation results, and data range conformity.

Quality Data: front‑end error request counts.

Service Data: service performance metrics.

Application Monitoring: CPU, QPM, disk usage, and database instances.
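To make the stability indicators concrete, here is a minimal sketch of how request volume, success rate, and p95 latency could be aggregated from raw request records. The function name and record shape are assumptions for illustration, not our production pipeline:

```python
import math
from typing import Iterable, Tuple


def summarize_requests(records: Iterable[Tuple[bool, float]]) -> dict:
    """Aggregate volume, success rate, and p95 latency.

    Each record is a (success, latency_ms) pair, e.g. one per
    drawing-export request in the measurement window.
    """
    records = list(records)
    if not records:
        return {"volume": 0, "success_rate": None, "p95_ms": None}
    volume = len(records)
    successes = sum(1 for ok, _ in records if ok)
    latencies = sorted(ms for _, ms in records)
    # Nearest-rank method: the p95 value sits at index ceil(0.95 * n) - 1.
    p95 = latencies[math.ceil(0.95 * volume) - 1]
    return {"volume": volume, "success_rate": successes / volume, "p95_ms": p95}
```

A dashboard then only needs to plot these aggregates per time bucket and per key scenario interface.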

4. Determine Monitoring Methods

For each indicator we selected appropriate methods, including alert systems, monitoring platforms, log platforms, fault reporting, real‑time operational feedback, and data dashboards.
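As one illustration of an alerting method, a sliding-window success-rate check can turn the stability indicators into alerts. This is a sketch; the class name, window size, and threshold are hypothetical, not our actual alert system:

```python
from collections import deque


class SuccessRateAlert:
    """Fire when the success rate over the last N requests drops below a threshold."""

    def __init__(self, window: int = 100, threshold: float = 0.99):
        self.window = deque(maxlen=window)  # deque drops the oldest entry automatically
        self.threshold = threshold

    def record(self, success: bool) -> bool:
        """Record one request outcome; return True if an alert should fire."""
        self.window.append(success)
        if len(self.window) < self.window.maxlen:
            return False  # not enough data yet to judge
        return sum(self.window) / len(self.window) < self.threshold
```

In practice a platform such as an alert system or log platform evaluates an equivalent rule server-side, but the windowed-threshold idea is the same.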

4. Monitoring Data

Below are examples of the monitoring outcomes:

Application Alerts:

Application Monitoring Dashboard:

Business Monitoring Dashboard:

CF Online Issue Statistics:

Online HTTP Inspection:

5. Monitoring Summary

The monitoring system is still evolving, but it has already helped us detect and resolve many issues quickly. For example, during daily releases for a major client between March 31 and April 23, it validated four rounds of back‑end performance optimization and supported the operational rollout to all sub‑accounts.

We also addressed frequent user reports of incorrect view generation or macro data display by establishing monitoring of the view‑model relationship, ensuring consistency of model counts across groups, facades, and macros.
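The consistency rule described above can be expressed as a simple set check. This is a sketch with assumed function and argument names, not the production implementation:

```python
from typing import Iterable, Set


def check_model_consistency(
    group_models: Iterable[str],
    facade_models: Iterable[str],
    macro_models: Iterable[str],
) -> Set[str]:
    """Return model IDs that do not appear in all three views.

    An empty result means the group, facade, and macro views agree
    on which models they render; any leftover ID flags a mismatch
    for the monitor to report.
    """
    all_ids = set(group_models) | set(facade_models) | set(macro_models)
    common = set(group_models) & set(facade_models) & set(macro_models)
    return all_ids - common
```

Running such a check on every generated drawing lets the monitor flag view-model mismatches before a merchant reports them.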

Overall, the monitoring system serves as a quality framework, collecting typical regression scenarios, supplementing interface test cases, and forming a closed quality loop.
