Mastering QPS, TPS, PV, UV, DAU, MAU & System Throughput
This article clarifies key performance metrics such as QPS, TPS, PV, UV, DAU, MAU, concurrent users, and system throughput: how they differ, how they relate to one another, and how they affect capacity planning. It also covers essential performance-testing concepts and evaluation methods for robust system design.
The following sections walk through these common metrics and concepts as they are used in system design and capacity planning.
1. QPS
QPS (Queries Per Second) measures the number of queries a server can handle per second, representing the maximum throughput of a specific query service.
2. TPS
TPS (Transactions Per Second) counts the number of complete transactions per second, where a transaction includes a client request, server processing, and the response back to the client.
3. Difference between QPS and TPS
TPS reflects the number of complete request‑response cycles per second, while QPS counts every individual query to the server, which may be multiple per transaction.
1) TPS includes:
The user sends a request to the server
The server processes the request internally
The server returns a response to the user
2) QPS is counted in a similar way, except that every individual query issued while handling a request counts, and a single request can generate several queries.
Example: a single page load may generate three server queries, resulting in one TPS but three QPS.
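To make the distinction concrete, here is a minimal sketch (the function names and the three-query breakdown are hypothetical, not taken from the article) in which one page request produces one transaction but three queries:

```python
# Minimal sketch: one page request = 1 transaction (counted toward TPS),
# while the three backend queries it issues are counted toward QPS.
# All names below are hypothetical.
transactions = 0  # contributes to TPS
queries = 0       # contributes to QPS

def backend_query(name):
    """Each call stands for one query sent to the server."""
    global queries
    queries += 1

def handle_page_request():
    """One complete request / processing / response cycle."""
    global transactions
    backend_query("user_profile")
    backend_query("order_list")
    backend_query("recommendations")
    transactions += 1

handle_page_request()
print(f"TPS events: {transactions}, QPS events: {queries}")  # TPS events: 1, QPS events: 3
```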
4. Concurrency
Concurrency (also called the degree of concurrency) indicates how many requests a system can handle simultaneously, reflecting its load capacity.
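A common back-of-the-envelope relationship (Little's Law, added here as context rather than taken from the article) estimates average concurrency as QPS multiplied by the average response time:

```python
# Little's Law estimate: average concurrency ≈ QPS × average response time.
# The numbers below are illustrative assumptions.
qps = 200                   # requests completed per second
avg_response_time_s = 0.05  # average response time in seconds (50 ms)

estimated_concurrency = qps * avg_response_time_s
print(estimated_concurrency)  # -> 10.0 requests in flight on average
```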
5. Throughput
Throughput is the number of requests processed by the system per unit time; both QPS and TPS are common quantitative indicators of throughput.
6. PV
PV (Page View) counts each page access or refresh, representing total page visits.
7. UV
UV (Unique Visitor) counts distinct users visiting a site within a day, typically deduplicated by a unique identifier.
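As a small illustration (the access-log format is a hypothetical simplification), PV is a plain tally of accesses while UV deduplicates by a user identifier:

```python
# Hypothetical day's access log: one entry per page access, holding the visitor's identifier.
access_log = ["user_a", "user_b", "user_a", "user_c", "user_a"]

pv = len(access_log)       # every access or refresh counts
uv = len(set(access_log))  # distinct visitors within the day

print(f"PV = {pv}, UV = {uv}")  # PV = 5, UV = 3
```

At large scale the deduplication is typically done with a bitmap or a probabilistic structure such as HyperLogLog rather than an in-memory set.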
8. DAU
DAU (Daily Active Users) measures the number of unique users who interact with a product in a single day, similar to UV but focused on activity.
9. MAU
MAU (Monthly Active Users) counts distinct users who engage with a product over a month.
10. System Throughput Evaluation
When designing a system, account for CPU time, I/O waits, and the latency of external service calls when estimating achievable performance. Besides QPS and concurrency, daily PV is another common dimension for capacity estimation.
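A widely used rule of thumb for turning daily PV into a peak-QPS estimate assumes that roughly 80% of the traffic arrives in about 20% of the day; the concrete numbers in this sketch are illustrative assumptions, not figures from the article:

```python
import math

# Illustrative capacity estimate from daily PV (all concrete numbers are assumptions).
daily_pv = 10_000_000
seconds_per_day = 24 * 3600

avg_qps = daily_pv / seconds_per_day
# Rule of thumb: ~80% of page views fall into ~20% of the day (the busy hours).
peak_qps = (daily_pv * 0.8) / (seconds_per_day * 0.2)

single_machine_qps = 100  # assumed measured capacity of one server
machines_needed = math.ceil(peak_qps / single_machine_qps)

print(f"average QPS ≈ {avg_qps:.0f}, peak QPS ≈ {peak_qps:.0f}, servers needed ≈ {machines_needed}")
```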
11. Basic Concepts and Formulas for Software Performance Testing
From a user perspective, response time (the time from initiating an action to receiving the result) directly shapes perceived performance. From an administrator perspective, key concerns include response time, resource utilization, scalability, the maximum number of supported users, bottleneck identification, and round-the-clock availability.
From a developer perspective, evaluate the soundness of the architecture, the database design, code efficiency, memory usage, thread synchronization, and potential resource contention.
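Because an average response time can mask slow outliers, test results are usually reported with high percentiles as well; the sketch below uses made-up samples and Python's standard statistics module:

```python
import statistics

# Hypothetical response-time samples in milliseconds collected during a test run.
samples_ms = [32, 35, 41, 38, 36, 240, 33, 37, 39, 850]

avg_ms = statistics.mean(samples_ms)
cut_points = statistics.quantiles(samples_ms, n=100)  # 99 percentile cut points
p95_ms = cut_points[94]  # 95th percentile
p99_ms = cut_points[98]  # 99th percentile

print(f"avg = {avg_ms:.1f} ms, p95 = {p95_ms:.1f} ms, p99 = {p99_ms:.1f} ms")
```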
Programmer DD
A tinkering programmer and author of "Spring Cloud Microservices in Action"
