High-Performance HTTP Pipelining: Principles, Test Methodology, and .NET Implementation
This article explains the concept of HTTP pipelining, presents a series of performance tests comparing pipe, thread‑group, and asynchronous request methods on a typical PC, analyzes the results and underlying TCP behavior, and provides a simple .NET implementation for practical use.
The article begins by defining "high performance" as the client's ability to saturate its network interface with outgoing requests, noting that ordinary servers show noticeable latency even under a single client's load.
It introduces the principle of HTTP pipelining (pipe) and describes a practical performance test that examines data flow and underlying mechanisms, concluding with a simple implementation.
Four test approaches are outlined for a single client sending 10,000 requests: (1) single‑process/thread polling (low performance, omitted from the results), (2) multiple threads coordinated with signals (heavy resource demands on the client), (3) a group of threads polling simultaneously, and (4) the platform's asynchronous send mechanism backed by a thread pool.
The main experiments compare three methods: a pipe test using 100 pipelines (each sending 100 requests), a thread‑group test with 100 threads each sending 100 requests, and an asynchronous test where 10,000 requests are submitted to a thread pool. The environment is a typical home PC (i5 4‑core, 12 GB RAM, 100 Mb broadband).
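The asynchronous test's structure (submitting all 10,000 requests to a thread pool) can be sketched in Python; `send_one` here is a hypothetical stand-in for a single HTTP request/response round trip, not the article's .NET code:

```python
from concurrent.futures import ThreadPoolExecutor

def send_one(i):
    # Hypothetical stand-in for one HTTP request/response round trip.
    return i

# Submit all 10,000 jobs to a pool; the worker count bounds how many
# requests are in flight at once, mirroring the thread-group setup.
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(send_one, range(10000)))

print(len(results))  # → 10000
```

The key cost this sketch hides is that each in-flight request ties up a worker thread while it waits for a response, which is why the thread-pool approaches saturate the CPU long before the network.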
Test requests target Baidu with a simple GET header. The tool used is PipeHttpRunner (download link provided). The pipe test completes in roughly 5 seconds, while the thread‑group test takes about 25 seconds and the asynchronous approach exceeds one minute, with CPU near full load in the latter two cases.
Additional tests on JD, Taobao, Youku, and internal servers show similar ten‑fold performance gaps when the server is not the bottleneck, and a test against NetEase e‑commerce API processes 10,000 requests (≈326 MB response) within 30 seconds, limited mainly by network bandwidth.
The article then explains the HTTP request/response flow, contrasting traditional keep‑alive (one request per packet, waiting for response) with pipelining, which allows multiple requests to be sent without waiting for replies, often packing several requests into a single TCP segment.
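The send-without-waiting behavior can be demonstrated end to end with Python's standard library. This is a minimal sketch, not the article's .NET tool: it starts a local HTTP/1.1 server (instead of hitting Baidu) and writes 100 GET requests onto one TCP connection in a single burst before reading any reply:

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive is required for pipelining

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

N = 100
request = f"GET / HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()

sock = socket.create_connection((host, port))
# Pipelining: all N requests go out back-to-back, with no wait for
# replies; the kernel packs several of them into each TCP segment.
sock.sendall(request * N)

# Responses come back sequentially on the same connection.
buf = b""
responses = 0
while responses < N:
    chunk = sock.recv(65536)
    if not chunk:
        break
    buf += chunk
    responses = buf.count(b"HTTP/1.1 200")
sock.close()
server.shutdown()

print(responses)  # → 100
```

Contrast this with traditional keep-alive, where the client would `sendall` one request, block on `recv` for its full response, and only then send the next; the pipelined version removes one network round trip per request.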
Key advantages of pipelining are listed: immediate sending of subsequent requests, batching multiple requests per packet, and requiring only a few TCP connections. Drawbacks include head‑of‑line blocking and difficulty matching responses to requests, which can be mitigated by tagging requests or adopting HTTP/2 with stream identifiers.
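Because HTTP/1.1 requires pipelined responses to arrive in the same order the requests were sent, a FIFO queue of pending tags is enough to re-associate them. A minimal illustration (the tags and bodies are invented for the example):

```python
from collections import deque

# Order of in-flight requests; the oldest entry owns the next response.
pending = deque()

def send_request(tag):
    pending.append(tag)  # record send order when the request goes out

def on_response(body):
    tag = pending.popleft()  # responses arrive in request order
    return tag, body

for tag in ("a", "b", "c"):
    send_request(tag)

matched = [on_response(f"body-{i}") for i in range(3)]
print(matched)  # → [('a', 'body-0'), ('b', 'body-1'), ('c', 'body-2')]
```

This ordering guarantee is also the source of head-of-line blocking: a slow response for `"a"` delays delivery of `"b"` and `"c"` even if the server has already computed them, which is what HTTP/2's per-stream identifiers eliminate.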
Finally, a simple .NET implementation is provided via the MyPipeHttpHelper library and the PipeHttpRunner demo, with GitHub links for the source code.