
Comprehensive Overview of HTTP/2: Connection Setup, Frames & Streams, HPACK Compression, Server Push, Flow Control, and Open Issues


vivo Internet Technology

This article provides a detailed introduction to the HTTP/2 protocol, covering its connection establishment, the relationship between frames and streams, header compression with HPACK, the server‑push feature, flow‑control mechanisms, and the challenges the protocol still faces.

1. HTTP/2 Connection Establishment

Contrary to common belief, HTTP/2 does not require TLS/SSL; it can also run over a plain TCP connection (known as h2c). Modern browsers, however, only support the TLS-based variant (h2).

Example of capturing an h2c session with tcpdump :

tcpdump -i eth0 port 80 and host nghttp2.org -w h2c.pcap &

Accessing the site over clear‑text HTTP/2 with curl :

curl http://nghttp2.org --http2 -v

The negotiation mechanism depends on the variant: for h2c, the client sends an HTTP/1.1 request with an Upgrade: h2c header; for h2, the protocol is negotiated via ALPN during the TLS handshake. The article shows Wireshark screenshots of the TLS ClientHello/ServerHello exchange, the client connection preface (the "magic" string PRI * HTTP/2.0), and the SETTINGS exchange that confirms the protocol switch.
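ALPN negotiation can be observed from Go's standard library alone. The sketch below (an illustration, not from the original article) starts a local TLS test server with HTTP/2 enabled, so the handshake offers "h2" via ALPN, and then checks which protocol the client actually ended up speaking:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// negotiatedProto starts a local TLS server with HTTP/2 enabled and returns
// the protocol version the client speaks after ALPN negotiation.
func negotiatedProto() string {
	srv := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "ok")
	}))
	srv.EnableHTTP2 = true // advertise "h2" via ALPN during the TLS handshake
	srv.StartTLS()
	defer srv.Close()

	// srv.Client() trusts the test certificate and supports HTTP/2.
	resp, err := srv.Client().Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	return resp.Proto
}

func main() {
	fmt.Println(negotiatedProto()) // prints "HTTP/2.0"
}
```

With EnableHTTP2 left false, the same request would report "HTTP/1.1", which makes the effect of the ALPN offer easy to verify locally.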

2. Frames and Streams Relationship

HTTP/2 introduces the concept of streams (independent, bidirectional sequences of frames) that are multiplexed over a single TCP connection. Stream IDs are odd for client‑initiated streams and even for server‑initiated streams (used for server push). Frames within a stream must be ordered, but streams themselves can be interleaved arbitrarily.

The SETTINGS frame advertises connection parameters such as the maximum number of concurrent streams (in the captured session, the client advertises 1000 while the server allows only 128). Images illustrate stream IDs, frame ordering, and the multiplexing layout.
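The SETTINGS payload itself is trivially simple: a sequence of 6-byte entries, each a 16-bit identifier followed by a 32-bit value (RFC 7540 §6.5.2). A small encoder, sketched here for illustration, shows what the server's MAX_CONCURRENT_STREAMS = 128 advertisement looks like on the wire:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// settingMaxConcurrentStreams is the RFC 7540 identifier for the
// SETTINGS_MAX_CONCURRENT_STREAMS parameter.
const settingMaxConcurrentStreams = 0x3

// appendSetting serializes one identifier/value pair in SETTINGS wire format:
// a 16-bit identifier followed by a 32-bit value, both big-endian.
func appendSetting(buf []byte, id uint16, val uint32) []byte {
	var s [6]byte
	binary.BigEndian.PutUint16(s[0:2], id)
	binary.BigEndian.PutUint32(s[2:6], val)
	return append(buf, s[:]...)
}

func main() {
	// Advertise a limit of 128 concurrent streams, as in the capture above.
	payload := appendSetting(nil, settingMaxConcurrentStreams, 128)
	fmt.Printf("% x\n", payload) // 00 03 00 00 00 80
}
```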

3. HPACK Header Compression

HTTP/2 reduces header overhead by compressing headers with the HPACK algorithm, which uses a static table, a dynamic table, and Huffman coding. The static table maps common header fields (e.g., :method, user-agent) to small integer indexes.

Installation of the HPACK client tools:

apt-get install nghttp2-client

Examples show how a header like :method: GET is encoded as the single byte 0x82, how a header with a static-table key but a dynamic value is encoded (e.g., user-agent with an 84-byte value), and how completely new headers are encoded.

Static and dynamic dictionaries, together with Huffman coding, can save roughly 25% or more of header traffic.
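The single-byte encoding of :method: GET follows directly from HPACK's "indexed header field" representation (RFC 7541 §6.1): the high bit is set and the table index is written as a 7-bit prefix integer. The sketch below (an illustration, not the article's code) implements that encoding, including the continuation bytes needed for indexes that do not fit in the prefix:

```go
package main

import "fmt"

// encodeIndexed emits an HPACK "indexed header field": high bit set, the
// table index as a 7-bit prefix integer (RFC 7541 §5.1 and §6.1).
func encodeIndexed(index uint32) []byte {
	const prefix = 0x7f // maximum value representable in the 7-bit prefix
	if index < prefix {
		return []byte{0x80 | byte(index)} // fits entirely in one byte
	}
	// Larger indexes spill into continuation bytes, 7 bits at a time.
	out := []byte{0x80 | prefix}
	index -= prefix
	for index >= 128 {
		out = append(out, byte(index%128|0x80))
		index /= 128
	}
	return append(out, byte(index))
}

func main() {
	// ":method: GET" is entry 2 of the HPACK static table, hence 0x82.
	fmt.Printf("0x%x\n", encodeIndexed(2)) // 0x82
}
```

Since the static table has 61 entries and every one fits in the 7-bit prefix, any fully matched common header costs exactly one byte on the wire.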

4. Server Push Capability

HTTP/2 allows the server to push resources (CSS, JS, etc.) before the client requests them, reducing perceived latency. A Go example using the Echo framework demonstrates how to push /app.css when serving /h2.html :

package main

import (
    "fmt"
    "net/http"

    "github.com/labstack/echo"
)

func main() {
    e := echo.New()
    e.Static("/", "html")
    e.GET("/request", func(c echo.Context) error {
        req := c.Request()
        format := `
Protocol: %s
Host: %s
Remote Address: %s
Method: %s
Path: %s
`
        return c.HTML(http.StatusOK, fmt.Sprintf(format, req.Proto, req.Host, req.RemoteAddr, req.Method, req.URL.Path))
    })
    e.GET("/h2.html", func(c echo.Context) (err error) {
        // http.Pusher is only available when the underlying connection
        // speaks HTTP/2; fall back to a plain response otherwise.
        if pusher, ok := c.Response().Writer.(http.Pusher); ok {
            // Push /app.css before the browser discovers it in the HTML.
            if err = pusher.Push("/app.css", nil); err != nil {
                c.Logger().Error(err)
                return
            }
        }
        return c.File("html/h2.html")
    })
    // Server push requires HTTP/2, so the server must be started over TLS.
    e.StartTLS(":1323", "cert.pem", "key.pem")
}

Network panel screenshots confirm that the CSS file is received via a PUSH_PROMISE and DATA frames without an explicit client request.

5. Flow Control

Although TCP already implements flow control, HTTP/2 adds its own layer because multiplexed streams share a single TCP window, and one fast stream could otherwise starve the others. The protocol's rules are:

- Both client and server can apply flow control, and each endpoint sets its own limits independently.
- Only DATA frames are subject to flow control; HEADERS, PUSH_PROMISE, and other control frames are not.
- Windows exist at two levels: per stream and for the connection as a whole.
- Flow control is hop-by-hop between the two endpoints of a connection; it is not propagated end-to-end through intermediaries.

Wireshark captures of WINDOW_UPDATE frames illustrate how the window size is adjusted during a session.
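The sender-side bookkeeping behind those captures is simple: every DATA frame consumes window credit, and every WINDOW_UPDATE from the receiver restores it. A minimal model, sketched here for illustration (field and method names are my own), shows the stall-and-resume cycle:

```go
package main

import (
	"errors"
	"fmt"
)

// sendWindow models the flow-control window an HTTP/2 sender must respect,
// at either stream or connection scope: DATA consumes credit,
// WINDOW_UPDATE refills it.
type sendWindow struct{ avail int32 }

// consume accounts for a DATA frame of n bytes; sending beyond the window
// is an error (a real peer would treat it as FLOW_CONTROL_ERROR).
func (w *sendWindow) consume(n int32) error {
	if n > w.avail {
		return errors.New("would exceed flow-control window")
	}
	w.avail -= n
	return nil
}

// update applies a WINDOW_UPDATE increment granted by the receiver.
func (w *sendWindow) update(increment int32) { w.avail += increment }

func main() {
	w := sendWindow{avail: 65535}          // the default initial window size
	fmt.Println(w.consume(60000), w.avail) // <nil> 5535
	fmt.Println(w.consume(10000))          // error: blocked until WINDOW_UPDATE
	w.update(16384)                        // receiver grants more credit
	fmt.Println(w.consume(10000), w.avail) // <nil> 11919
}
```

In a real implementation the sender tracks one such window per stream plus one for the whole connection, and a DATA frame must fit inside both before it may be sent.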

6. Remaining Issues

HTTP/2 still suffers from handshake latency (especially with TLS) and head-of-line blocking at the TCP layer. QUIC (the basis of HTTP/3) moves many transport functions into user space on top of UDP, eliminating TCP-level head-of-line blocking and allowing connection migration. Screenshots show Chrome's QUIC toggle and examples of QUIC-enabled sites.

In very lossy networks, the single TCP connection becomes a bottleneck, because one lost packet stalls every multiplexed stream behind it; HTTP/1.x with several parallel connections can then perform better.

Overall, HTTP/2 brings significant performance improvements through multiplexing, header compression, server push, and application‑layer flow control, but its reliance on TCP still imposes limits that HTTP/3 aims to overcome.
