Understanding HTTP/2: History, Features, and Protocol Mechanics
HTTP/2, now widely supported by browsers and major sites, succeeds HTTP/1.1 with a binary framing layer, header compression, stream prioritization, server push, and flow control. These mechanisms address head-of-line blocking and other inefficiencies while preserving HTTP semantics. This article covers the protocol's evolution, structure, and deployment considerations.
HTTP/2 is now widely adopted across the Internet, with most modern browsers and large websites supporting it, which makes a closer look at the protocol worthwhile.
The problems with HTTP/1.1 that motivated HTTP/2 include head-of-line blocking, where a connection cannot carry a new request until the previous response completes, and the repeated transmission of large, rarely changing, uncompressed header fields such as User‑Agent and Cookie, which wastes bandwidth.
The evolution of HTTP/2 began with Google's experimental SPDY protocol in 2009, followed by the HTTP Working Group's first draft in 2012, initial implementations in 2013, publication of HTTP/2 as RFC 7540 in May 2015, and Google's deprecation of SPDY that same year.
HTTP/2 introduces a binary framing layer between the application (HTTP) and transport (TCP/TLS) layers. All communication occurs over a single TCP connection that can carry multiple concurrent streams, each composed of one or more frames that may be sent out of order and reassembled using stream identifiers.
Key concepts include:
Frame : the smallest unit of HTTP/2 communication, with a 9‑byte header and variable payload.
Message : a logical HTTP request or response consisting of one or more frames.
Stream : a virtual channel within a TCP connection, identified by an integer; streams are bidirectional, can be prioritized, and must follow ordering rules within the same stream.
Connection : the underlying TCP connection that multiplexes many streams.
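To make the frame concept concrete, here is a minimal Go sketch that decodes the fixed 9-byte frame header described above (a 24-bit payload length, 8-bit type, 8-bit flags, and a 31-bit stream identifier whose high bit is reserved). The type and function names, and the sample bytes, are illustrative, not taken from any library:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// FrameHeader mirrors the fixed 9-byte HTTP/2 frame header:
// 24-bit payload length, 8-bit type, 8-bit flags, and a 31-bit
// stream identifier (the high bit is reserved).
type FrameHeader struct {
	Length   uint32 // payload length, 24 bits
	Type     uint8  // e.g. 0x0 = DATA, 0x1 = HEADERS
	Flags    uint8  // e.g. 0x1 = END_STREAM on DATA frames
	StreamID uint32 // 31 bits; 0 refers to the connection itself
}

// parseFrameHeader decodes the first 9 bytes of a frame.
func parseFrameHeader(b [9]byte) FrameHeader {
	return FrameHeader{
		Length:   uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2]),
		Type:     b[3],
		Flags:    b[4],
		StreamID: binary.BigEndian.Uint32(b[5:9]) & 0x7FFFFFFF,
	}
}

func main() {
	// A DATA frame (type 0x0) with END_STREAM (0x1) set,
	// carrying a 4-byte payload on stream 5.
	raw := [9]byte{0x00, 0x00, 0x04, 0x00, 0x01, 0x00, 0x00, 0x00, 0x05}
	fmt.Printf("%+v\n", parseFrameHeader(raw)) // {Length:4 Type:0 Flags:1 StreamID:5}
}
```

Because every frame carries its stream identifier, frames from different streams can be interleaved on the wire and reassembled on arrival, which is what makes multiplexing over one connection possible.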
HTTP/2 adds several features:
Header compression using the HPACK algorithm (static table, dynamic table, Huffman coding) to reduce redundant header transmission.
Request prioritization allowing clients to assign weights and dependencies to streams so servers can favor critical resources like HTML over images.
Server push enabling servers to proactively send resources (e.g., CSS, images) on new streams without an explicit client request.
Flow control based on WINDOW_UPDATE frames, allowing fine‑grained control of data flow per stream and per connection.
These improvements speed up page loads and make HTTP/1.1-era front‑end workarounds such as domain sharding, sprite images, and resource concatenation unnecessary under HTTP/2.
However, HTTP/2 also inherits TCP’s head‑of‑line blocking; packet loss can stall all streams on a connection, and a broken TCP connection aborts all active streams. Additionally, the increased concurrency can raise server load and produce bursty traffic patterns.
Protocol negotiation differs for clear‑text and TLS connections: the "h2c" identifier negotiates HTTP/2 over plain TCP via the HTTP/1.1 Upgrade mechanism, while "h2" uses ALPN (Application‑Layer Protocol Negotiation) during the TLS handshake.
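In Go's standard library, advertising "h2" via ALPN amounts to listing it in the TLS configuration's NextProtos field (net/http does this automatically for its own servers). A minimal sketch, with a helper name of our own invention:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// alpnConfig builds a TLS config that advertises HTTP/2 ("h2")
// during the handshake and falls back to HTTP/1.1 when the peer
// does not support it. Order expresses the server's preference.
func alpnConfig() *tls.Config {
	return &tls.Config{
		NextProtos: []string{"h2", "http/1.1"},
	}
}

func main() {
	fmt.Println(alpnConfig().NextProtos) // [h2 http/1.1]
}
```

The negotiated protocol is settled before any application bytes flow, which is why "h2" needs no Upgrade round trip, unlike clear-text "h2c".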
Support is now mature: all major browsers support HTTP/2, and servers such as Nginx, HAProxy, and many language frameworks (e.g., Go’s net/http since 1.6, server push since 1.8, h2c since 1.11) provide native implementations. Deployment options include using CDN hosting for static assets, enabling HTTP/2 directly in the application server, or terminating HTTP/2 at a reverse proxy and forwarding to HTTP/1.1 back‑ends.
360 Tech Engineering