
Demystifying the TCP/IP Model: From Layers to Handshakes

This guide explains the TCP/IP protocol suite layer by layer, from the link layer to the application layer. It covers core protocols such as IP, ARP, ICMP, and DNS, along with TCP connection management: the three-way handshake, four-way termination, flow control, and congestion control.


1. TCP/IP Model

The TCP/IP protocol suite (Transmission Control Protocol/Internet Protocol) is the family of protocols on which the Internet is built; it is the Internet's core protocol stack.

The reference model divides the protocols into four layers: link layer, network layer, transport layer, and application layer. The diagram below shows the correspondence between the TCP/IP model and the OSI model.

The TCP/IP suite wraps data layer by layer. The topmost application layer includes familiar protocols such as HTTP and FTP. The transport layer hosts TCP and UDP. The network layer contains IP, which adds source and destination IP addresses so the packet can be routed. The data link layer adds an Ethernet header and a CRC trailer before transmission.

Sending data is an encapsulation process on the sender side (each layer pushes its own header, like pushing onto a stack) and a decapsulation process on the receiver side (each layer pops its header off).

The following example uses an HTTP request to illustrate the encapsulation steps.
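The layering above can be sketched in a few lines of Python. This is illustrative only: the header names are invented placeholders, not real wire formats.

```python
# Illustrative only: each layer prepends its own header to the payload
# handed down from the layer above (header names are made up).
def encapsulate(http_payload: bytes) -> bytes:
    tcp_segment = b"TCP_HDR" + http_payload        # transport layer
    ip_packet   = b"IP_HDR"  + tcp_segment         # network layer
    eth_frame   = b"ETH_HDR" + ip_packet + b"CRC"  # link layer adds header AND trailer
    return eth_frame

frame = encapsulate(b"GET / HTTP/1.1\r\n")
```

Decapsulation on the receiver simply strips these wrappers in the reverse order.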

2. Data Link Layer

The physical layer converts the binary bit stream into electrical signals or light pulses. The data link layer groups bits into frames and transmits them between neighboring nodes identified by MAC addresses.

Frame encapsulation: add source and destination MAC addresses to the network‑layer packet.

Transparent transmission: bit stuffing (zero-bit insertion) and escape characters ensure that payload bytes can never be mistaken for frame delimiters.

Reliable transmission: rarely needed on low-error wired links, but required on error-prone wireless links (WLAN).

Error detection (CRC): receiver discards frames with detected errors.
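The check-and-discard behavior can be sketched with Python's standard library. Here `zlib.crc32` stands in for the CRC-32 that Ethernet hardware computes over the frame; the framing itself is simplified.

```python
import zlib

# Sender appends a CRC over the payload; receiver recomputes and compares.
# A mismatch means the frame was corrupted in transit and is discarded.
def make_frame(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    payload, received = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received

frame = make_frame(b"hello")
ok = check_frame(frame)                 # intact frame passes
bad = check_frame(b"x" + frame[1:])     # corrupted frame fails the check
```

Note that CRC detects errors but does not correct them; recovery, if any, is left to higher layers.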

3. Network Layer

1. IP Protocol

IP is the core of the TCP/IP suite; TCP, UDP, ICMP, and IGMP all travel inside IP packets. IP itself is unreliable and makes no delivery guarantees—when reliability is needed, a higher-level protocol such as TCP must provide retransmission (UDP does not add any).

1.1 IP Address

IP addresses identify hosts at the network layer, analogous to MAC addresses at the data link layer. A 32‑bit IP address is divided into network and host portions to reduce routing table size.

Class A: 0.0.0.0 ~ 127.255.255.255
Class B: 128.0.0.0 ~ 191.255.255.255
Class C: 192.0.0.0 ~ 223.255.255.255
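Since the class is determined entirely by the first octet, classifying an address is a simple range check; a minimal sketch:

```python
def ip_class(addr: str) -> str:
    """Classify an IPv4 address by its first octet (classful ranges)."""
    first = int(addr.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    return "D/E"   # multicast and reserved ranges

a = ip_class("10.0.0.1")
b = ip_class("172.16.0.1")
c = ip_class("192.168.1.1")
```

Modern networks use classless addressing (CIDR), but the classful ranges above still explain why these private ranges look the way they do.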

1.2 IP Header

The TTL (Time-to-Live) field, an 8-bit value, limits how many routers a packet can traverse before being discarded. Each router decrements TTL by one; when TTL reaches zero, the packet is dropped. The maximum possible value is 255; common initial values set by operating systems are 64 and 128.

2. ARP and RARP

ARP (Address Resolution Protocol) maps an IP address to a MAC address. When a host needs to send an IP packet, it checks its ARP cache; if the mapping is missing, it broadcasts an ARP request. The host owning the IP replies with its MAC address, which the requester stores in its cache.
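The cache-then-broadcast logic can be modeled in a few lines. This is a toy model: the `lan_hosts` dict stands in for the hosts that would answer a real broadcast on the wire.

```python
# Toy ARP resolver: consult the cache first; on a miss, "broadcast" a
# request (simulated here by a lookup table standing in for the LAN).
lan_hosts = {"192.168.1.7": "aa:bb:cc:dd:ee:07"}   # who is on the wire
arp_cache = {}

def resolve(ip: str) -> str:
    if ip in arp_cache:          # cache hit: no broadcast needed
        return arp_cache[ip]
    mac = lan_hosts[ip]          # ARP request/reply on the real network
    arp_cache[ip] = mac          # store the answer for next time
    return mac

mac = resolve("192.168.1.7")     # miss: triggers the "broadcast"
mac2 = resolve("192.168.1.7")    # hit: answered from the cache
```

Real ARP caches also expire entries after a timeout, which this sketch omits.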

RARP works in the opposite direction (IP from MAC) and is rarely used today.

3. ICMP Protocol

ICMP (Internet Control Message Protocol) operates at the IP layer to report errors such as host unreachable or network unreachable. When an IP packet encounters an error, ICMP encapsulates the error information and sends it back to the source, allowing higher‑level protocols to react.

4. ping

Ping is the most famous ICMP application. It sends an ICMP Echo Request (type 8) and expects an Echo Reply (type 0) to verify reachability and measure round‑trip time.
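An ICMP Echo Request is small enough to build by hand. The sketch below assembles one with the standard Internet checksum (RFC 1071); actually sending it requires a raw socket and elevated privileges, which is omitted here.

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total >> 16) + (total & 0xFFFF)   # fold the carries back in
    total += total >> 16
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    # type=8 (Echo Request), code=0; checksum covers the whole message,
    # computed with the checksum field temporarily set to zero.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = echo_request(0x1234, 1)
```

A useful property of the Internet checksum: verifying a correctly checksummed message yields 0, which is how receivers validate it.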

The name “ping” comes from sonar terminology because it uses echo‑request packets to detect another host.

5. Traceroute

Traceroute discovers the path packets take to a destination by sending UDP probes with incrementally increasing TTL values. The router at which TTL reaches zero drops the probe and returns an ICMP Time Exceeded message, revealing its IP address. When a probe finally reaches the destination, the host replies with ICMP Port Unreachable (the probe targets a deliberately unlikely UDP port), which ends the trace.
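A real traceroute needs raw sockets and root privileges, but the TTL mechanism itself can be modeled offline. In this toy simulation, `path` is the ordered list of router addresses ending at the destination; the node where a probe's TTL expires is the one that "answers".

```python
def trace(path, max_hops=30):
    """Simulate traceroute: the probe with TTL=n is answered by the n-th
    hop (ICMP Time Exceeded), or by the destination itself once reached
    (ICMP Port Unreachable for classic UDP-based traceroute)."""
    discovered = []
    for ttl in range(1, max_hops + 1):
        hop = path[min(ttl, len(path)) - 1]   # node where TTL hits zero
        discovered.append(hop)
        if hop == path[-1]:                   # destination reached: stop
            break
    return discovered

route = trace(["10.0.0.1", "172.16.0.1", "203.0.113.9"])
```

Each iteration corresponds to one probe (real traceroute sends three per TTL to measure round-trip times).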

6. TCP/UDP

Both are transport‑layer protocols but differ in characteristics and use cases.

Message-oriented (UDP): the application defines the size of each datagram; if it is too large, IP fragmentation occurs, reducing efficiency.

Byte-stream (TCP): TCP treats data as an unstructured stream, segmenting large blocks as needed and providing reliable delivery, flow control, and congestion control.

When to use TCP?

When reliable delivery is required, such as HTTP/HTTPS, FTP, email (POP/SMTP), etc.

When to use UDP?

When low latency is more important than reliability, such as real‑time audio/video or DNS queries.
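The message-oriented nature of UDP is easy to observe on the loopback interface: each `sendto()` arrives as exactly one datagram, whereas TCP writes may be merged into a single byte stream. A minimal sketch using only the standard library:

```python
import socket

# UDP preserves datagram boundaries: each sendto() is delivered as one
# recvfrom(). Port 0 asks the OS to pick a free port for us.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"first", addr)
send.sendto(b"second", addr)

# Two sends -> two distinct datagrams, never b"firstsecond".
msgs = [recv.recvfrom(1024)[0] for _ in range(2)]
send.close()
recv.close()
```

On a real network UDP datagrams may also be lost or reordered; loopback merely makes the boundary-preserving behavior visible.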

7. DNS

The Domain Name System maps human-readable domain names to IP addresses. It resolves names primarily over UDP on port 53, falling back to TCP for responses too large to fit in a single datagram (and for zone transfers), and provides distributed name resolution.
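A DNS query is a compact binary message: a 12-byte header followed by the question, with the name encoded as length-prefixed labels. The sketch below builds one with `struct`; the transaction ID `0x1234` is an arbitrary example value.

```python
import struct

def dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query for `name` (qtype 1 = A record).

    Header fields: ID, flags (0x0100 = recursion desired), QDCOUNT=1,
    then zero answer/authority/additional counts."""
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # "example.com" -> b"\x07example\x03com\x00" (length-prefixed labels)
    qname = b"".join(
        len(label).to_bytes(1, "big") + label.encode()
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN

q = dns_query("example.com")
```

Sending `q` over a UDP socket to a resolver on port 53 would return a response sharing the same transaction ID.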

8. TCP Connection Establishment and Termination

1. Three‑Way Handshake

TCP establishes a reliable connection using three steps: SYN, SYN‑ACK, and ACK, synchronizing sequence numbers and exchanging window sizes.

First handshake: Client sends SYN with sequence number x.

Second handshake: Server replies with SYN-ACK (ack = x+1, seq = y).

Third handshake: Client sends ACK (ack = y+1); both sides enter the ESTABLISHED state.

Why three‑way handshake?

It prevents old duplicate SYN packets from establishing unintended connections, and it confirms that both directions of the link work: each side has sent and received at least once before data flows.
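In practice the handshake is performed entirely by the kernel: calling `connect()` against a listening socket triggers the full SYN / SYN-ACK / ACK exchange, and once `connect()` and `accept()` return, both endpoints are ESTABLISHED. A minimal loopback sketch:

```python
import socket

# Listening socket; port 0 lets the OS choose a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()

# connect() performs the three-way handshake in the kernel; on loopback
# it completes immediately via the server's accept backlog.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)
conn, peer = server.accept()    # returns the ESTABLISHED connection

client.sendall(b"hello")        # data can flow only after the handshake
data = conn.recv(5)
for s in (client, conn, server):
    s.close()
```

Tools like `ss -t` or `netstat` would show both sockets in the ESTABLISHED state between `accept()` and `close()`.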

2. Four‑Way Termination

After data transfer, TCP closes the connection with four steps (FIN/ACK exchanges) to ensure both sides have finished sending.

The final TIME_WAIT state guarantees that delayed packets from the old connection are discarded before the socket can be reused.

Why wait 2 MSL?

To ensure the final ACK can be retransmitted if it is lost, and that all delayed duplicate packets from this connection have expired before the same address/port pair can be reused.

9. TCP Flow Control

Flow control prevents the sender from overwhelming the receiver by limiting the sender’s rate to the receiver’s advertised window (rwnd). The sliding‑window mechanism adjusts rwnd dynamically.
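The core rule is simple: the sender never has more unacknowledged data outstanding than the receiver's advertised window. The toy model below makes that rule explicit; `rwnd_schedule` is an invented parameter listing the window the receiver advertises in each round.

```python
def send_stream(data: bytes, rwnd_schedule):
    """Toy sliding-window sender: in each round, send at most the
    receiver's advertised window (rwnd); a zero window blocks the
    sender entirely until the receiver opens it again."""
    sent, log = 0, []
    for rwnd in rwnd_schedule:
        window = min(rwnd, len(data) - sent)
        if window <= 0 and sent < len(data):
            log.append("sender blocked: rwnd = 0")
            continue
        log.append(data[sent:sent + window])   # bytes put on the wire
        sent += window
        if sent >= len(data):
            break
    return log

log = send_stream(b"abcdefghij", rwnd_schedule=[4, 0, 3, 4])
```

The zero-window round models a saturated receiver buffer; real TCP uses window probes to detect when rwnd reopens.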

10. TCP Congestion Control

Congestion control adjusts the congestion window (cwnd) based on network conditions. When the network is free, cwnd grows; when congestion is detected, cwnd is reduced.

1. Slow Start and Congestion Avoidance

Slow Start: cwnd starts at one MSS and doubles each RTT until loss occurs or the threshold (ssthresh) is reached.

When cwnd < ssthresh, slow start is used; when cwnd > ssthresh, congestion avoidance takes over, increasing cwnd linearly (by one MSS per RTT).
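The two growth regimes are easy to see in a simulation. The sketch below tracks cwnd per RTT in units of MSS, ignoring loss events for clarity:

```python
def cwnd_evolution(ssthresh: int, rounds: int, mss: int = 1):
    """cwnd per RTT (in MSS units): exponential growth below ssthresh
    (slow start), then linear growth of one MSS per RTT (congestion
    avoidance)."""
    cwnd, history = mss, []
    for _ in range(rounds):
        history.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + mss
    return history

hist = cwnd_evolution(ssthresh=8, rounds=7)
# exponential up to ssthresh=8, then linear: 1, 2, 4, 8, 9, 10, 11
```

On a timeout, real TCP would reset cwnd to one MSS and set ssthresh to half the cwnd at the time of loss, restarting this curve.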

2. Fast Retransmit and Fast Recovery

Fast Retransmit

When the sender receives three duplicate ACKs, it immediately retransmits the missing segment without waiting for the retransmission timer, improving throughput.

Fast Recovery

After fast retransmit, the sender sets cwnd to ssthresh (half of the cwnd at the time of loss) and enters congestion avoidance instead of resetting cwnd to 1, allowing a smoother recovery.
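The trigger-and-recover logic can be condensed into one ACK handler. This is a simplified model (the `state` dict and its keys are invented for the sketch; real implementations track more, e.g. window inflation during recovery):

```python
def on_ack(state, ack):
    """Process one ACK: the third duplicate ACK triggers fast retransmit,
    halves ssthresh, and sets cwnd = ssthresh (fast recovery) instead of
    collapsing cwnd back to one MSS."""
    if ack == state["last_ack"]:
        state["dup"] += 1
        if state["dup"] == 3:                       # fast retransmit fires
            state["ssthresh"] = max(state["cwnd"] // 2, 2)
            state["cwnd"] = state["ssthresh"]       # fast recovery, not slow start
            state["retransmit"] = state["last_ack"] # resend the missing segment
    else:                                           # new data acknowledged
        state["last_ack"], state["dup"] = ack, 0

state = {"cwnd": 16, "ssthresh": 64, "last_ack": 100,
         "dup": 0, "retransmit": None}
for ack in [100, 100, 100]:   # three duplicate ACKs for sequence 100
    on_ack(state, ack)
```

After the three duplicates, cwnd drops to half its previous value (8) rather than to 1, so transmission resumes in congestion avoidance instead of slow start.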

Tags: TCP/IP · network protocols · congestion control · handshake · transport layer
Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
