
From Stuck Frames to Smooth Play: QA’s Guide to Client Performance Testing

This article explains how QA can shift client performance testing from a reactive, ad‑hoc task to a proactive, daily practice by understanding rendering pipelines, using lightweight tools such as RenderDoc, Tracy and ETW, and applying systematic analysis to pinpoint and resolve CPU, GPU, memory and I/O bottlenecks.

NetEase LeiHuo Testing Center

Client performance issues such as stutter, frame drops, latency and crashes are common obstacles in game QA, but vague feedback like “the game is laggy” rarely leads to solutions. Precise identification of whether the bottleneck lies in CPU load, GPU rendering, memory, or I/O is essential for QA to add real value.
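As a rough first pass, comparing CPU time and GPU time against the total frame time can already suggest which side of the pipeline is the bottleneck. The heuristic below is a minimal sketch with illustrative thresholds, not a standard classification; the function name and frame budget are assumptions:

```python
def classify_bottleneck(frame_ms: float, cpu_ms: float, gpu_ms: float,
                        budget_ms: float = 33.3) -> str:
    """Rough heuristic: whichever side's time dominates an over-budget
    frame is the likely bottleneck. Thresholds are illustrative."""
    if frame_ms <= budget_ms:
        return "within budget"
    if gpu_ms >= cpu_ms:
        return "likely GPU-bound (check draw calls, overdraw, shaders)"
    return "likely CPU-bound (check game logic, draw-call submission)"

# e.g. a 40 ms frame where the GPU took 35 ms points at rendering cost
print(classify_bottleneck(40.0, 12.0, 35.0))
```

Even a crude triage like this turns "the game is laggy" into a report the rendering or gameplay team can act on.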

From One-Off to Daily Testing

Historically, many teams have treated client performance testing as a separate, late-stage task handled by developers, which often results in late discovery of problems, compressed optimization windows, and forced trade-offs between visual quality and performance. Integrating lightweight performance checks into daily functional testing—such as monitoring frame rate, CPU/GPU usage, or memory consumption—allows earlier detection and faster resolution.
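A lightweight daily check can be as simple as sampling a few metrics during a functional test run and flagging anything over an agreed budget. This sketch assumes hypothetical metric names and budget values; real budgets would come from the dev team per scene:

```python
# Hypothetical per-scene budgets (illustrative values, not from the article).
BUDGETS = {"avg_fps": 30.0, "peak_ram_mb": 2048.0, "peak_vram_mb": 1536.0}

def check_budgets(sample: dict) -> list:
    """Return a list of human-readable budget violations for a metrics sample."""
    violations = []
    if sample.get("avg_fps", 0.0) < BUDGETS["avg_fps"]:
        violations.append(
            f"avg_fps {sample['avg_fps']:.1f} below {BUDGETS['avg_fps']:.0f}")
    for key in ("peak_ram_mb", "peak_vram_mb"):
        if sample.get(key, 0.0) > BUDGETS[key]:
            violations.append(f"{key} {sample[key]:.0f} over {BUDGETS[key]:.0f}")
    return violations

print(check_budgets({"avg_fps": 28.5, "peak_ram_mb": 1900, "peak_vram_mb": 1700}))
```

Run as part of the daily smoke test, an empty list means the build is within budget; anything else becomes a bug report filed the same day.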

Practical Examples

In one case, a large map ran at 30 FPS with high memory and VRAM usage. Late QA involvement forced a rushed fix that reduced texture quality. After applying HISM (hierarchical instanced static meshes), draw calls dropped dramatically, CPU load fell, and FPS rose to 50; GPU usage then spiked, revealing the next bottleneck.

Performance Concepts and Pipeline

The rendering pipeline consists of CPU work (game logic, input, preparing draw calls), draw‑call submission, and GPU work (vertex processing, rasterization, pixel shading, output merging). Common optimizations include reducing code complexity, limiting object count, using LOD, batching, texture atlases, simplifying shaders, and minimizing overdraw.
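Batching (and instancing, as in the HISM case above) reduces CPU-side cost by merging objects that share the same mesh and material into a single submission. A minimal sketch of the idea, counting draw calls before and after grouping; the object fields are assumptions for illustration:

```python
from collections import defaultdict

def count_draw_calls(objects, batched: bool) -> int:
    """Unbatched: one draw call per object. Batched: one per unique
    (mesh, material) pair, as with instancing or static batching."""
    if not batched:
        return len(objects)
    groups = defaultdict(int)
    for obj in objects:
        groups[(obj["mesh"], obj["material"])] += 1
    return len(groups)

# 300 trees sharing one mesh and material collapse into a single draw call.
trees = [{"mesh": "tree", "material": "bark"} for _ in range(300)]
print(count_draw_calls(trees, batched=False), "->", count_draw_calls(trees, batched=True))
```

This is why the HISM fix in the earlier example cut CPU load: the cost of draw-call submission scales with the number of submissions, not the number of objects drawn.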

Key Metrics

FPS (frames per second)

Jank/Stutter (frame drops)

Frame Time (ms per frame)

CPU Utilization

GPU Utilization

VRAM usage

RAM usage
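Several of these metrics fall out of a single list of per-frame timestamps. A sketch of the derivation; the jank definition (a frame taking more than twice the target frame time) is an assumption, since teams define stutter thresholds differently:

```python
def frame_metrics(timestamps_s, target_fps: float = 60.0) -> dict:
    """Derive FPS, average frame time, and a jank count from frame timestamps."""
    deltas = [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]
    total = timestamps_s[-1] - timestamps_s[0]
    jank_threshold = 2.0 / target_fps  # assumed definition: > 2x target frame time
    return {
        "fps": len(deltas) / total,
        "avg_frame_ms": 1000.0 * total / len(deltas),
        "jank_frames": sum(1 for d in deltas if d > jank_threshold),
    }

# Nine steady 16.7 ms frames plus one 100 ms hitch.
ts, t = [0.0], 0.0
for d in [1 / 60] * 9 + [0.1]:
    t += d
    ts.append(t)
print(frame_metrics(ts))
```

Note how the hitch barely moves average FPS but is caught by the jank count, which is why FPS alone is a poor stutter metric.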

Analysis Tools

RenderDoc is a graphics debugger that captures a single frame, allowing inspection of draw calls, resources, shaders and texture data. It helps identify high‑cost draw calls, overdraw, missing textures, or expensive post‑process effects.

Tracy is a CPU‑side profiler that records function‑level timings, thread activity, memory allocations and lock contention. It is useful for locating spikes in frame time, memory leaks, or multithreading issues.

ETW (Event Tracing for Windows) provides system‑wide tracing of CPU, disk, memory, network and custom events without modifying the application. It is ideal for diagnosing system‑level bottlenecks, driver stalls, or multi‑process resource contention.

How to Use the Tools

Run an internal (debug) build with matching PDB files.

For RenderDoc, place renderdoc.dll alongside the executable, enable frame capture, and open the resulting .rdc file.

For Tracy, launch tracy-profiler.exe, connect to the client, record for 30‑60 seconds around the problematic area, pause, then stop and save the trace.

For ETW, enable Fast Sampling/GPU tracing in the UI, start tracing before the issue appears, stop, and analyze the generated .etl file with Windows Performance Analyzer.

Interpreting Results

In RenderDoc, the Event Browser shows per-draw-call durations; high-cost passes such as RenderOpaque, DS_Render2GBuffer, SSAO, or RenderEffect indicate where GPU time is spent. The Pipeline State view reveals which shaders and resources are used.

In Tracy, red frames in the timeline highlight frame‑time spikes. Inspect thread activity to see whether the main thread (game logic) or worker threads (resource loading) cause the delay. Memory allocation graphs expose leaks or excessive allocations.

ETW provides a holistic view of CPU, I/O and memory usage across the whole system, helping to spot external processes or driver issues that affect game performance.

Conclusion

Performance testing is more than a single “game is laggy” report; it requires systematic, daily monitoring, a solid understanding of the rendering pipeline, and basic proficiency with tools like RenderDoc, Tracy and ETW. By adopting these practices, QA can move from passive symptom reporting to active bottleneck identification, accelerating optimization cycles and improving overall game quality.

Written by

NetEase LeiHuo Testing Center

LeiHuo Testing Center provides high-quality, efficient QA services, striving to become a leading testing team in China.
