
MySQL Middleware Performance Testing I – Common Mistakes, Practical Methods, and Distributed Transactions

This presentation details how to correctly benchmark MySQL middleware performance, exposing common pitfalls, describing practical testing methodologies, emphasizing the need to observe both middleware and actual database pressure, and discussing distributed transaction considerations and metric selection for reliable results.

Aikesheng Open Source Community

The talk, originally delivered by Huang Yan at the MySQL Technical Salon in Chengdu (July 7, 2018), introduces the speaker’s experience as R&D Director at iKang, focusing on distributed database technologies and MySQL middleware.

Agenda: (1) Common (incorrect) performance‑testing approaches, (2) Practical methods used by the team, (3) Distributed‑transaction topics.

Common Mistakes – Treating the middleware itself as the sole observation target, ignoring the downstream database pressure, and assuming higher QPS always means better performance. Real‑world examples show how TLS differences and varying concurrency can mislead conclusions.

Correct Observation Object – Middleware + connection attributes + the actual SQL executed on the database. Visual flow diagrams illustrate how pressure propagates from client → middleware → database → storage, highlighting that the red arrows (observed points) often capture aggregate system pressure rather than middleware‑specific load.

Example SQL used in tests:

PREPARE ps FROM '…';
SELECT * FROM a LIMIT 1;
SELECT * FROM b LIMIT 1;

These statements expose context‑transfer behavior: a prepare may be routed to a different backend than subsequent select statements, affecting performance and fairness.
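The routing hazard can be sketched in a few lines. This is a hypothetical toy router, not any specific middleware: it contrasts stateless round-robin routing, where a PREPARE and its later EXECUTE can land on different backends, with session pinning, which keeps statement context on one backend.

```python
import itertools

class RoundRobinRouter:
    """Routes each statement to the next backend, ignoring session state."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def route(self, session_id, statement):
        return next(self._cycle)

class SessionPinnedRouter:
    """Pins a whole session to one backend so PREPARE and EXECUTE agree."""
    def __init__(self, backends):
        self.backends = backends

    def route(self, session_id, statement):
        return self.backends[hash(session_id) % len(self.backends)]

backends = ["db-1", "db-2"]
rr = RoundRobinRouter(backends)
pinned = SessionPinnedRouter(backends)

# Under round-robin, the PREPARE and the later EXECUTE land on different
# backends; the second backend has no such prepared statement.
prep_target = rr.route("sess-1", "PREPARE ps FROM '...'")
exec_target = rr.route("sess-1", "EXECUTE ps")
print(prep_target == exec_target)  # False: statement context was lost

# Session pinning keeps the prepared-statement context on one backend.
print(pinned.route("sess-1", "PREPARE ps FROM '...'")
      == pinned.route("sess-1", "EXECUTE ps"))  # True
```

A benchmark that only issues stand-alone SELECTs never exercises this path, which is why the talk's test mix deliberately includes a PREPARE followed by further statements.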

Metrics Selection – Throughput (TPS/QPS) vs. latency (response time, percentiles). High‑throughput systems may suffer latency spikes; business requirements dictate which metric to prioritize.
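The point about throughput hiding latency spikes is easy to demonstrate numerically. The sketch below (illustrative numbers, plain-Python percentile estimate) summarizes a run of per-request latencies: the average looks healthy while the p99 exposes a stall.

```python
# Summarize throughput and tail latency from raw per-request latencies;
# an average alone can hide latency spikes that violate business SLAs.
def summarize(latencies_ms, wall_time_s):
    xs = sorted(latencies_ms)
    pct = lambda p: xs[min(len(xs) - 1, int(p / 100 * len(xs)))]
    return {
        "qps": len(xs) / wall_time_s,
        "avg_ms": sum(xs) / len(xs),
        "p50_ms": pct(50),
        "p99_ms": pct(99),
    }

# 99 fast requests plus one 500 ms stall: the average stays under 7 ms,
# but the p99 reveals the spike.
samples = [2.0] * 99 + [500.0]
s = summarize(samples, wall_time_s=1.0)
print(s["qps"], round(s["avg_ms"], 2), s["p50_ms"], s["p99_ms"])
# 100.0 6.98 2.0 500.0
```

Which of these numbers matters most is exactly the business decision the talk describes: a batch workload may only care about QPS, while an interactive one lives or dies by the p99.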

Observability Tools – eBPF/SystemTap for OS‑level tracing, middleware‑provided metrics, and the USE method (Utilization, Saturation, Errors) for resource analysis. Sample eBPF scripts can plot MySQL latency distributions.
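The USE method itself is just a checklist over each resource. A minimal sketch, with hypothetical metric snapshots and thresholds (the real values would come from tools like vmstat, iostat, or the middleware's own counters):

```python
# USE method checklist: for each resource, check Utilization, Saturation,
# and Errors. Thresholds here are illustrative, not recommendations.
def use_check(resource, util, saturation, errors,
              util_limit=0.7, sat_limit=0):
    findings = []
    if util > util_limit:
        findings.append(f"{resource}: high utilization {util:.0%}")
    if saturation > sat_limit:
        findings.append(f"{resource}: saturated (queue depth {saturation})")
    if errors > 0:
        findings.append(f"{resource}: {errors} errors")
    return findings

# Hypothetical snapshot taken during a benchmark run.
snapshot = {
    "cpu":  dict(util=0.92, saturation=5, errors=0),  # run-queue backlog
    "disk": dict(util=0.40, saturation=0, errors=2),  # I/O errors logged
    "net":  dict(util=0.30, saturation=0, errors=0),
}
for res, metrics in snapshot.items():
    for finding in use_check(res, **metrics):
        print(finding)
```

Walking every resource through the same three questions is what keeps the analysis from fixating on the middleware while, say, the database host's CPU is the real bottleneck.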

Tool Calibration – BenchmarkSQL (a Java TPC-C implementation) can deadlock under REPEATABLE READ isolation; calibrating tools against multiple workloads and verifying the pressure they actually generate is essential.
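One concrete calibration check is to compare what the benchmark tool claims against what the database actually served, e.g. by sampling a cumulative query counter (such as the Questions value from SHOW GLOBAL STATUS) before and after the run. All numbers below are hypothetical:

```python
# Calibration check: does the database see the load the tool reports?
def observed_qps(questions_before, questions_after, seconds):
    """QPS derived from a cumulative server-side query counter."""
    return (questions_after - questions_before) / seconds

tool_reported_qps = 12000                          # what the benchmark prints
db_qps = observed_qps(1_000_000, 1_600_000, 60)    # counters sampled on the DB

ratio = db_qps / tool_reported_qps
print(round(db_qps), round(ratio, 2))  # 10000 0.83

# A ratio far from 1.0 means the tool is not generating the pressure you
# think it is -- calibrate before trusting any comparison built on it.
```

This is the same discipline the talk applies to BenchmarkSQL: never assume the generator is healthy; measure it like any other component.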

Distributed Transactions – Discusses ACID anomalies, lock protocols (S2PL/SS2PL), and the need to test both correctness and performance under fault injection (CPU, memory, disk, network, process failures).
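The locking discipline at issue can be illustrated with a toy strict two-phase locking (SS2PL) manager, a deliberately simplified sketch with no deadlock detection: shared locks coexist, exclusive locks conflict, and every lock is held until commit. Holding locks to commit is what makes schedules serializable, and also what benchmark tools can deadlock on under REPEATABLE READ.

```python
import threading

class LockManager:
    """Toy SS2PL lock table: locks are released only at commit."""
    def __init__(self):
        self._locks = {}            # key -> (mode, set of txn ids)
        self._mu = threading.Lock()

    def try_lock(self, txn, key, mode):
        """mode: 'S' (shared) or 'X' (exclusive); False means conflict."""
        with self._mu:
            held = self._locks.get(key)
            if held is None:
                self._locks[key] = (mode, {txn})
                return True
            held_mode, owners = held
            if mode == "S" and held_mode == "S":
                owners.add(txn)
                return True
            # X requests (or S vs X) succeed only if txn is the sole owner.
            return owners == {txn}

    def commit(self, txn):
        """SS2PL: release everything this transaction holds, all at once."""
        with self._mu:
            for key in list(self._locks):
                mode, owners = self._locks[key]
                owners.discard(txn)
                if not owners:
                    del self._locks[key]

lm = LockManager()
print(lm.try_lock("t1", "row:42", "S"))  # True
print(lm.try_lock("t2", "row:42", "S"))  # True: shared locks coexist
print(lm.try_lock("t2", "row:42", "X"))  # False: t1 still holds its S lock
lm.commit("t1")
print(lm.try_lock("t2", "row:42", "X"))  # True: t2 is now the sole owner
```

Two sessions each holding an S lock and both waiting to upgrade to X is precisely the deadlock shape that correctness-plus-performance testing under fault injection is meant to surface.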

Overall, the presentation stresses a scientific approach: observe, identify bottlenecks, solve them, and repeat until no further improvements are possible.

Tags: Observability, middleware, Performance Testing, MySQL, distributed transactions, sysbench, USE method
Written by

Aikesheng Open Source Community

The Aikesheng Open Source Community provides stable, enterprise‑grade MySQL open‑source tools and services, releases a premium open‑source component each year (1024), and continuously operates and maintains them.
