
Understanding HTTP over TCP: Connection Process, Handshake, and Management

This article explains how HTTP relies on TCP/IP, describes the browser's steps to open a TCP connection, details TCP segment and IP packet structures, outlines socket API calls, and discusses TCP handshake, slow start, and HTTP connection techniques such as parallel, persistent, and pipelined connections.


Previously I wrote an article about HTTP features, messages, and request methods; this follow‑up focuses on interview‑style questions about HTTP, starting with how HTTP uses TCP connections.

1. How HTTP uses TCP connections

Almost all HTTP traffic runs over TCP/IP. A client opens a TCP connection to a server anywhere on the Internet; once established, the bytes exchanged are never lost, damaged, or received out of order. If the network or either machine fails, the connection is broken and both sides are notified.

When a browser receives a URL it performs the following steps:

Parse the hostname.

Resolve the hostname to an IP address.

Obtain the port number.

Initiate a connection to the IP address and port.

Send an HTTP GET request.

Read the HTTP response.

Close the connection.
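The steps above can be sketched with Python's standard `socket` and `urllib.parse` modules. This is a minimal illustration, not a real browser: it assumes plain HTTP on port 80, skips caching, redirects, and error handling, and the `fetch` helper name is my own.

```python
import socket
from urllib.parse import urlsplit

def fetch(url: str) -> bytes:
    """Walk the seven steps: parse, resolve, connect, request, read, close."""
    parts = urlsplit(url)                    # 1. parse the hostname out of the URL
    host = parts.hostname
    port = parts.port or 80                  # 3. obtain the port (default 80 for HTTP)
    ip = socket.gethostbyname(host)          # 2. resolve the hostname to an IP address
    with socket.create_connection((ip, port)) as s:  # 4. open the TCP connection
        request = (f"GET {parts.path or '/'} HTTP/1.1\r\n"
                   f"Host: {host}\r\n"
                   "Connection: close\r\n\r\n")
        s.sendall(request.encode("ascii"))   # 5. send the HTTP GET request
        response = b""
        while chunk := s.recv(4096):         # 6. read until the server closes the stream
            response += chunk
        return response                      # 7. the `with` block closes the connection
```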

1.1 Basic knowledge of TCP

TCP is a reliable data pipe

TCP delivers HTTP data in order and without errors. Bytes written at one end appear at the other end in the same order.

TCP streams are segmented and carried by IP packets

TCP data is broken into segments, each wrapped in an IP packet. Thus HTTP sits at the top of the stack: HTTP over TCP over IP. HTTPS inserts a TLS/SSL encryption layer between HTTP and TCP.

Each TCP segment is carried by an IP packet, which contains:

An IP header (typically 20 bytes).

A TCP segment header (typically 20 bytes).

A TCP data block (0 bytes or more).

The IP header holds source/destination addresses and length; the TCP header holds ports, flags, and sequencing information.
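To make the "typically 20 bytes" concrete, the sketch below hand-builds a minimal TCP header with Python's `struct` module and decodes it again. The field values are invented for illustration, and the checksum is left zero rather than computed.

```python
import struct

# Minimal 20-byte TCP header, no options (RFC 793 layout):
# src port, dst port, seq, ack, data-offset/flags, window, checksum, urgent ptr.
header = struct.pack(
    "!HHIIHHHH",
    49152,              # source port (an ephemeral port)
    80,                 # destination port (HTTP)
    1000,               # sequence number
    0,                  # acknowledgment number
    (5 << 12) | 0x002,  # data offset = 5 words (20 bytes), SYN flag set
    65535,              # window size
    0,                  # checksum (left zero in this sketch)
    0,                  # urgent pointer
)
assert len(header) == 20  # the "typically 20 bytes" from the text

src, dst, seq, ack, off_flags, window, checksum, urgent = struct.unpack("!HHIIHHHH", header)
print(src, dst, seq, off_flags >> 12)
```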

Keeping a TCP connection alive

Multiple TCP connections can be open simultaneously, identified by a 4‑tuple: source IP address, source port, destination IP address, destination port. No two connections can have the exact same 4‑tuple.
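You can observe a connection's 4‑tuple directly: `getsockname()` returns the local (source) half and `getpeername()` the remote (destination) half. The sketch below opens a loopback connection to itself so it needs no external network.

```python
import socket

# Open a loopback connection, then read back the 4-tuple that identifies it.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()

src_ip, src_port = client.getsockname()   # source half of the 4-tuple
dst_ip, dst_port = client.getpeername()   # destination half
four_tuple = (src_ip, src_port, dst_ip, dst_port)
print(four_tuple)

client.close(); conn.close(); server.close()
```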

TCP sockets

Operating systems provide socket APIs to create and manage TCP endpoints. A representative subset of the API is shown below:

Socket API Call      Description
s = socket()         Create a new, unnamed, unconnected socket
bind(s, ...)         Assign a local port and interface to the socket
connect(s, ...)      Establish a connection from the local socket to a remote host and port
listen(s, ...)       Mark a local socket as willing to accept connections
s2 = accept(s)       Wait for an incoming connection on the local port

The socket API hides the low‑level handshake and segmentation details, allowing applications to read and write data streams.
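Python's `socket` module mirrors these calls almost name for name. The sketch below runs a one-shot echo server on loopback to exercise the full sequence from the table; the `serve_once` helper is my own naming.

```python
import socket
import threading

# One-shot echo server exercising the table's calls:
# socket() -> bind() -> listen() -> accept(), plus connect() on the client side.
def serve_once(server: socket.socket) -> None:
    conn, addr = server.accept()      # s2 = accept(s): wait for an incoming connection
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)            # echo the bytes back unchanged

server = socket.socket()              # s = socket(): new, unnamed, unconnected socket
server.bind(("127.0.0.1", 0))         # bind(s, ...): assign a local interface and port
server.listen(1)                      # listen(s, ...): willing to accept connections
threading.Thread(target=serve_once, args=(server,)).start()

client = socket.create_connection(server.getsockname())  # connect(s, ...)
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)
client.close()
server.close()
```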

2. TCP connection handshake

The three‑way handshake works as follows:

The client sends a SYN segment to request a connection.

The server replies with a SYN‑ACK segment, acknowledging the request.

The client sends an ACK segment, completing the connection.

These packets are managed by the TCP/IP stack and invisible to HTTP programmers; only the resulting latency is observable.
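Although the handshake packets are hidden, their cost is not: `connect()` does not return until SYN, SYN‑ACK, and ACK have completed, so timing it measures the handshake. Over loopback this is microseconds; to a remote server it is roughly one round trip.

```python
import socket
import time

# The handshake is invisible to applications, but its latency shows up in connect().
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

start = time.perf_counter()
client = socket.create_connection(server.getsockname())  # blocks until SYN/SYN-ACK/ACK completes
handshake_seconds = time.perf_counter() - start
print(f"connect() took {handshake_seconds * 1000:.3f} ms")

conn, _ = server.accept()
conn.close(); client.close(); server.close()
```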

TCP slow start

TCP gradually increases its transmission rate after a connection is established. Each successfully acknowledged segment grants permission to send two more segments, doubling the congestion window each round‑trip until the network capacity is reached.
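The doubling described above can be modeled with a few lines. This is an idealized simulation of the window growth only, capped at a hypothetical capacity in segments; real TCP also reacts to loss, the slow-start threshold, and receiver limits.

```python
# Idealized slow start: the congestion window (in segments) doubles every
# round-trip until it reaches a hypothetical capacity, where it is capped.
def slow_start_windows(capacity_segments: int) -> list[int]:
    windows = []
    cwnd = 1                # start with one segment per round-trip
    while cwnd < capacity_segments:
        windows.append(cwnd)
        cwnd *= 2           # each ACK permits two more segments: window doubles per RTT
    windows.append(capacity_segments)
    return windows

print(slow_start_windows(64))  # → [1, 2, 4, 8, 16, 32, 64]
```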

3. HTTP connection handling

HTTP/1.0 opened a new TCP connection for each request. Three techniques improve on this:

Parallel connections: multiple TCP connections used concurrently.

Persistent connections (keep‑alive): reuse an existing TCP connection for multiple requests.

Pipelined connections (HTTP/1.1): send multiple HTTP requests on a persistent connection without waiting for each response.

Parallel connections reduce overall latency by overlapping transfers; persistent connections avoid the overhead of repeatedly opening and closing TCP sockets; pipelining allows several requests to be queued on a single persistent connection, which is especially beneficial on high‑latency networks.
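Persistent connections can be demonstrated with the standard library alone: `http.client.HTTPConnection` keeps its TCP connection open across requests when the server speaks HTTP/1.1 with keep-alive. The sketch below starts a tiny local server so no external network is needed; the handler class and its responses are invented for the demo.

```python
import http.client
import http.server
import threading

# A tiny local HTTP/1.1 server so the example needs no external network.
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # HTTP/1.1 keeps the connection open by default
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # needed for keep-alive
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):          # keep the example's output quiet
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Persistent connection: both GET requests travel over one TCP connection.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
results = []
for path in ("/a", "/b"):
    conn.request("GET", path)
    resp = conn.getresponse()
    results.append((resp.status, resp.read()))
conn.close()
server.shutdown()
print(results)
```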

4. References

《图解 HTTP》 (Illustrated HTTP)

《HTTP 权威指南》 (HTTP: The Definitive Guide)

“I strongly recommend the book *Illustrated HTTP*; its diagrams make the concepts very clear.”

— I am Yi, a coder pushing forward despite setbacks.

TCP · HTTP · networking · Connection Management · web fundamentals
Written by Full-Stack Internet Architecture
Introducing full-stack Internet architecture technologies centered on Java