How Browsers Manage TCP Connections, Persistent HTTP, Pipelining, and Multiplexing
This article explains how modern browsers handle TCP connections and HTTP requests, covering persistent connections, the limits on simultaneous connections per host, HTTP/1.1 pipelining (and why it is disabled in practice), SSL session reuse, and how HTTP/2 multiplexing improves image loading performance.
First Question
In HTTP/1.0 the server closes the TCP connection after each response, which is costly; therefore many servers support Connection: keep-alive to reuse the same TCP connection for subsequent requests, reducing both TCP and SSL overhead. The article shows two screenshots: the first request incurs connection and SSL setup, while the second reuses the existing connection.
HTTP/1.1 formalized this behavior by making persistent connections the default unless Connection: close is sent, so a TCP connection normally stays open after a request finishes.
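As a minimal sketch, reusing a persistent connection simply means writing a second request to the same socket after the first. The host example.com and the paths below are illustrative; in HTTP/1.1 keep-alive is the default, and the header is spelled out here only for clarity:

```java
public class KeepAliveSketch {
    // Build a minimal HTTP/1.1 request. Connection: keep-alive is
    // implicit in HTTP/1.1; it is written explicitly here for clarity.
    static String request(String path, String host) {
        return "GET " + path + " HTTP/1.1\r\n"
             + "Host: " + host + "\r\n"
             + "Connection: keep-alive\r\n"
             + "\r\n";
    }

    public static void main(String[] args) {
        // Both requests can be written to the SAME socket, one after the
        // other, because the server leaves the connection open between them.
        String first  = request("/index.html", "example.com");
        String second = request("/logo.png",   "example.com");
        System.out.print(first + second);
    }
}
```

The second request skips both the TCP three-way handshake and, over HTTPS, the SSL handshake, which is exactly the saving the screenshots in the article illustrate.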
Second Question
A single TCP connection can carry multiple HTTP requests when the connection is kept alive.
Third Question
In HTTP/1.1 a single TCP connection can process only one request at a time; the lifetimes of two requests cannot overlap. Although the specification defines pipelining (sending multiple requests without waiting for responses), browsers disable it by default because of implementation complexity and head‑of‑line blocking.
The HTTP/1.1 specification (RFC 2616, §8.1.2.2) states: "A client that supports persistent connections MAY 'pipeline' its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received."
Because HTTP/1.1 responses are just streams of bytes with no request identifier, the browser cannot match a response to a specific request except by order of arrival. For example, if the client pipelines GET /query?q=A and GET /query?q=B, only the order of the responses tells it which answer belongs to which query.
Pipelining faces practical problems such as proxy incompatibility, complex implementation, and head‑of‑line blocking, so modern browsers keep it disabled.
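A minimal sketch of what a pipelined byte stream would look like, reusing the two query paths from above (the host is illustrative). Both requests go out back-to-back, and the server is required to answer them in this same order:

```java
public class PipeliningSketch {
    // Concatenate several requests into one byte stream, sent without
    // waiting for responses. Nothing in the stream ties a future response
    // to a particular request; only the arrival order disambiguates them,
    // which is why the spec forces servers to respond in request order.
    static String pipelined(String host, String... paths) {
        StringBuilder sb = new StringBuilder();
        for (String p : paths) {
            sb.append("GET ").append(p).append(" HTTP/1.1\r\n")
              .append("Host: ").append(host).append("\r\n\r\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(pipelined("example.com", "/query?q=A", "/query?q=B"));
    }
}
```

The ordering requirement is also the source of head-of-line blocking: if the response to /query?q=A is slow, the response to /query?q=B must wait behind it even when it is already ready.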
HTTP/2 introduces multiplexing, allowing multiple HTTP requests to be interleaved over a single TCP connection. The article includes a screenshot showing parallel request and download times on the same connection.
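Java's built-in HttpClient (Java 11+) can illustrate this. The sketch below assumes a hypothetical HTTP/2-capable server at example.com; when HTTP/2 is negotiated, the concurrent requests are multiplexed as independent streams over one TCP connection, and the client transparently falls back to HTTP/1.1 otherwise:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class Http2Sketch {
    static HttpClient newH2Client() {
        // Prefer HTTP/2; the client negotiates it via ALPN and falls
        // back to HTTP/1.1 if the server does not support it.
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = newH2Client();
        // Fire several requests at once; over HTTP/2 they travel as
        // separate streams on a single TCP connection instead of
        // queuing behind one another.
        List<CompletableFuture<HttpResponse<String>>> futures =
            List.of("/img/1.png", "/img/2.png", "/img/3.png").stream()
                .map(p -> HttpRequest.newBuilder(
                        URI.create("https://example.com" + p)).build())
                .map(r -> client.sendAsync(r, HttpResponse.BodyHandlers.ofString()))
                .toList();
        futures.forEach(CompletableFuture::join);
    }
}
```

This interleaving is what produces the parallel request and download times visible in the article's screenshot.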
During the HTTP/1.1 era browsers improved page load speed by (1) reusing a persistent TCP connection for sequential requests and (2) opening several parallel TCP connections to the same host.
Fourth Question
When a page is refreshed, the browser may reuse an existing SSL session if the underlying TCP connection is still alive, avoiding a new SSL handshake.
Fifth Question
Browsers limit the number of simultaneous TCP connections to a single host. Chrome, for example, allows up to six concurrent connections per host; other browsers have similar limits.
If a page contains dozens of images, the browser will open multiple TCP connections (subject to the per‑host limit) and queue additional requests until a connection becomes free. When HTTP/2 is available, the browser can use a single connection with multiplexing; otherwise it falls back to multiple connections over HTTP/1.1.
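The per-host limit and the queuing behavior can be modeled with a semaphore. This is a sketch of the browser-side policy only, not a real network stack; the limit of six matches Chrome's per-host cap mentioned above:

```java
import java.util.concurrent.Semaphore;

public class PerHostLimitSketch {
    // Browsers cap concurrent connections per host (Chrome: 6).
    // A counting semaphore models the same queuing: the 7th request
    // blocks until one of the 6 "connections" is released.
    static final int PER_HOST_LIMIT = 6;
    static final Semaphore slots = new Semaphore(PER_HOST_LIMIT);

    static void fetch(String path) throws InterruptedException {
        slots.acquire();        // waits if 6 fetches are already in flight
        try {
            // ... perform the request over one of the open connections ...
        } finally {
            slots.release();    // hand the connection to a queued request
        }
    }
}
```

With dozens of images, this is why a waterfall chart over HTTP/1.1 shows requests starting in batches of six, while HTTP/2 starts them all at once on one connection.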
In summary, images loaded over HTTPS from the same domain are typically fetched via a persistent TCP connection that may be upgraded to HTTP/2 for multiplexed transfer; if HTTP/2 is unavailable, the browser uses several parallel connections respecting the per‑host limit.
Java Captain
Focused on Java technologies: SSM, the Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading; occasionally covers DevOps tools like Jenkins, Nexus, Docker, ELK; shares practical tech insights and is dedicated to full‑stack Java development.