
Performance Testing and Scalability Evaluation of a Miop‑Based Push Service

This report details the design, deployment, and extensive performance testing of a Miop protocol‑based push service, describing the test environment, methodology, multi‑stage load tests up to one million concurrent connections, observed metrics, encountered issues, and recommendations for ensuring stability and scalability.

360 Quality & Efficiency

The document introduces a push service built on Miop (Message I/O Protocol), a TCP‑based protocol used between mobile clients and the message push server. The service carries massive mobile traffic, so its capacity and stability must be validated under high concurrency.

Business Scenario – Devices send a Bind message via Miop to establish a connection, allowing the server to mark clients online, store state in memory, synchronize status to Redis, and manage heartbeats, timeouts, and data push.

Test Objectives – Verify the single‑machine link limit (target 1 million connections) and assess Redis degradation handling under concurrent load.

Test Preparation – The environment mirrors production as closely as possible: a dedicated push server backed by a Redis cluster, 17 physical client machines each configured to support 60 000 users (with the local port range widened via echo 1024 65535 > /proc/sys/net/ipv4/ip_local_port_range), and the high‑performance wrk tool (extended with Miop support) used to generate long‑lived connections.
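The per‑client tuning can be sketched as follows; the file‑descriptor limit is an illustrative assumption, since the report only specifies the port range:

```shell
# Client-side tuning for 60 000 connections per machine (run as root).
# Widening the ephemeral port range to 1024-65535 leaves 64 512 usable
# ports per client IP: enough headroom for 60 000 long-lived sockets,
# since all of them target a single server ip:port.
echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range

# Each connection also consumes a file descriptor; the limit used in
# the test is not stated, so 100 000 here is an assumption.
ulimit -n 100000
```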

Example command to start a client: ./wrk miop://$HOSTNAME:pk@serverIp:serverPort/ -t 12 -c 60000 --latency -d 120m --timeout 15 -s delay.lua
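Starting that command on all 17 client machines one by one is tedious; a small fan‑out helper can print the per‑host command line (the client01–client17 hostnames and default server address are assumptions, not from the report):

```shell
#!/usr/bin/env bash
# Print the wrk invocation for each of the 17 client machines; in a real
# run each printed line would be executed on its host (e.g. via ssh).
# SERVER_IP/SERVER_PORT defaults and the clientNN hostnames are assumptions.
launch_clients() {
  local server_ip="${SERVER_IP:-10.0.0.1}"
  local server_port="${SERVER_PORT:-8080}"
  local i
  for i in $(seq -w 1 17); do
    # \$HOSTNAME is left unexpanded on purpose: it resolves on the remote host.
    echo "client${i}: ./wrk miop://\$HOSTNAME:pk@${server_ip}:${server_port}/ -t 12 -c 60000 --latency -d 120m --timeout 15 -s delay.lua"
  done
}

launch_clients
```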

Test Case Analysis – The test proceeds in four stages: (1) Link‑limit test, gradually increasing online clients to observe memory usage and connection growth; (2) Stability test, running sustained loads to monitor error rates and resource usage; (3) Concurrency test, scaling connections from 100 k to 1 M to compare latency, error rates, and server metrics; (4) Exception test, evaluating the impact of frequent online/offline operations, Redis reconnections, and server fault‑tolerance.

Test Execution – Link‑limit Test – Clients are launched sequentially; the 50th‑percentile response time is around 23.20 ms. System metrics (CPU, memory, NIC I/O) remain stable until the server approaches ~1 M connections, after which TCP queue overflows and connection timeouts appear. A script (nohup ./tcp.sh > /dev/null 2>&1 &) records per‑second connection growth.
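The report does not include tcp.sh itself; a plausible reconstruction samples the ESTABLISHED socket count once per second (the /proc/net/tcp parsing and log format are assumptions):

```shell
#!/usr/bin/env bash
# Count sockets in state 01 (ESTABLISHED) from the kernel's TCP tables.
# Field 4 of /proc/net/tcp is the hex connection state.
count_established() {
  awk '$4 == "01"' /proc/net/tcp /proc/net/tcp6 2>/dev/null | wc -l | tr -d ' '
}

count_established

# The original nohup'd script would log one sample per second, e.g.:
#   while true; do echo "$(date +%s) $(count_established)"; sleep 1; done >> tcp_conn.log
```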

Test Execution – Concurrency Test – Results show latency increasing with load: 100 k connections (50th‑percentile latency 407 µs), 200 k (585 µs, 6 692 timeouts), 600 k (a significant latency rise and more timeouts). TCP connection growth peaks at 10–30 k per second; beyond roughly 20 k new connections per second the server begins dropping or timing out connections, indicating the need to enlarge the TCP connection pool.
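When the drops begin, the kernel's listen‑queue counters make the overflow visible, and the usual knobs for enlarging the accept backlog look like this (the values are illustrative, not tuned recommendations from the report):

```shell
# Symptoms: cumulative listen-queue overflow counters.
netstat -s | grep -i 'listen'   # "N times the listen queue of a socket overflowed"
ss -ltn                         # Recv-Q vs Send-Q shows backlog pressure per listener

# Knobs (run as root; 65535 is illustrative headroom):
sysctl -w net.core.somaxconn=65535            # cap on the listen() accept backlog
sysctl -w net.ipv4.tcp_max_syn_backlog=65535  # half-open (SYN) queue depth
```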

Stability Test – Long‑duration runs (2 hours) at various client scales show proportional increases in NIC traffic and disk I/O, while CPU and memory usage stay relatively flat. Redis reconnection tests confirm state synchronization, with memory‑state and DB‑state query interfaces used for consistency checks.
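The consistency check across memory state, Redis, and the DB reduces to a three‑way comparison; the helper below is a sketch, and the commented commands that would feed it (key layout, query endpoints) are assumptions rather than the service's actual interfaces:

```shell
#!/usr/bin/env bash
# check_consistency REDIS_STATE MEM_STATE DB_STATE
# Prints CONSISTENT when all three agree, otherwise a MISMATCH line.
check_consistency() {
  if [ "$1" = "$2" ] && [ "$2" = "$3" ]; then
    echo "CONSISTENT"
  else
    echo "MISMATCH redis=$1 mem=$2 db=$3"
  fi
}

# In a live check the three values would come from the running systems,
# e.g. (hypothetical key layout and query endpoints):
#   redis-cli GET "online:${DEVICE_ID}"
#   curl -s "http://pushserver/state/memory/${DEVICE_ID}"
#   curl -s "http://pushserver/state/db/${DEVICE_ID}"
check_consistency online online online
```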

Exception Cases – (1) Redis disconnection causing client‑side connection drops; (2) Inconsistent state between Redis and DB under heavy load; (3) Server hitting file descriptor limits, leading to massive connection timeouts. Adjusting system limits and improving server‑side connection handling mitigates these issues.
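The descriptor ceiling behind exception case (3) is usually lifted in two places; the numbers below are illustrative headroom for ~1 M connections, not values from the report:

```shell
# Per-process soft limit (the server process needs > 1 M descriptors):
ulimit -n            # inspect the current limit
ulimit -n 1100000    # raise it for the current shell (requires a high hard limit)

# System-wide ceiling (run as root):
sysctl -w fs.file-max=2097152

# Persistent per-user limits in /etc/security/limits.conf:
#   pushuser  soft  nofile  1100000
#   pushuser  hard  nofile  1100000
```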

Conclusion – The key factor in long‑connection pressure testing is TCP handling. By applying varied load patterns and monitoring detailed metrics, hidden performance bottlenecks and stability issues can be uncovered, enabling targeted optimizations to improve the push service’s reliability and scalability.

Tags: Backend, Concurrency, Redis, Performance Testing, Push Service, Miop
Written by

360 Quality & Efficiency

360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.
