Fundamentals of TCP: Role, Packet Size, Sequencing, Assembly, Slow Start, and Loss Recovery
This article explains the basic principles of the TCP protocol, covering its purpose in ensuring reliable data transmission, packet size limits, sequence numbering, how packets are reassembled by the operating system, the slow‑start mechanism with ACK handling, and methods for detecting and retransmitting lost packets.
TCP (Transmission Control Protocol) is a core Internet protocol that ensures reliable, ordered delivery of data between hosts. It operates above the IP layer and below application‑layer protocols, providing completeness and error‑recovery for transmitted packets.
Ethernet frames carry IP packets, which in turn encapsulate TCP segments. Ethernet limits its payload to 1500 bytes (the MTU); the 20‑byte IPv4 header reduces the space available for the TCP segment to 1480 bytes, and the 20‑byte TCP header further reduces the usable application payload to 1460 bytes (the typical maximum segment size, or MSS).
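The payload arithmetic above can be checked directly (assuming the minimal 20‑byte IPv4 and TCP headers, with no options):

```python
# Worked example of the header overhead described above.
ETHERNET_MTU = 1500      # maximum Ethernet payload = the whole IP packet
IP_HEADER = 20           # minimal IPv4 header, no options
TCP_HEADER = 20          # minimal TCP header, no options

ip_payload = ETHERNET_MTU - IP_HEADER    # room left for the TCP segment
tcp_payload = ip_payload - TCP_HEADER    # room left for application data (MSS)

print(ip_payload)    # 1480
print(tcp_payload)   # 1460
```

Header options (e.g., TCP timestamps) would shrink the usable payload further, which is why real-world MSS values are sometimes a little below 1460.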
Each TCP segment carries a sequence number (SEQ). The first segment starts from a random initial sequence number (ISN), and each subsequent segment's SEQ advances by the payload length of the previous one, allowing the receiver to reorder segments and detect missing data.
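A minimal sketch of how a sender assigns sequence numbers (the segment payloads here are hypothetical; real ISNs are chosen by the OS, not `random`):

```python
import random

# The ISN is random; each segment's SEQ is the ISN plus the number of
# payload bytes already sent, so the receiver can put segments in order.
isn = random.randrange(2**32)            # random initial sequence number
payloads = [b"GET / HT", b"TP/1.1\r\n", b"\r\n"]

seq = isn
for data in payloads:
    print(f"SEQ={seq % 2**32}  len={len(data)}")
    seq += len(data)                     # next segment starts after this payload
```

After all three segments, `seq` has advanced by exactly the total payload length (18 bytes), which is also the value the receiver will acknowledge.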
Reassembly of TCP segments is performed by the operating system, not the application. The OS uses the sequence numbers and ports to deliver complete data streams to the appropriate application (e.g., HTTP, FTP).
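A simplified sketch of that reassembly step (this is an illustration of the idea, not the kernel's actual data structure): buffer out‑of‑order segments by SEQ and release only the contiguous prefix of the byte stream to the application.

```python
def reassemble(segments, isn):
    """segments: list of (seq, payload) tuples, possibly out of order."""
    buffered = {seq: data for seq, data in segments}
    stream, next_seq = b"", isn
    while next_seq in buffered:
        data = buffered.pop(next_seq)
        stream += data
        next_seq += len(data)        # advance past the consumed payload
    return stream                    # only the contiguous prefix is delivered

# Segments arriving out of order are still delivered in order:
print(reassemble([(107, b"world"), (100, b"hello, ")], 100))  # b'hello, world'
```

Note that if a segment in the middle is missing, everything after the gap stays buffered until the hole is filled, which is exactly why a single lost packet can stall delivery of later data to the application.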
TCP employs a slow‑start algorithm to probe network capacity safely. It begins with a small congestion window (typically 10 segments) and roughly doubles the window every round‑trip time as ACKs arrive; when packet loss is detected, it sharply reduces the sending rate.
Each ACK (acknowledgement) carries the next sequence number the receiver expects and its advertised receive window. Duplicate ACKs or a retransmission timeout trigger retransmission of the lost segment, ensuring data integrity.
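The duplicate‑ACK path can be sketched as follows (the three‑duplicate threshold matches TCP's standard fast‑retransmit rule; the ACK values are hypothetical):

```python
def detect_loss(acks, dup_threshold=3):
    """Return the SEQ numbers whose segments should be fast-retransmitted."""
    dup_count, last_ack, lost = 0, None, []
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == dup_threshold:  # third duplicate ACK
                lost.append(ack)            # retransmit segment starting at `ack`
        else:
            last_ack, dup_count = ack, 0
    return lost

# The receiver keeps re-ACKing 3000 because that segment never arrived:
print(detect_loss([1000, 2000, 3000, 3000, 3000, 3000]))  # [3000]
```

Fast retransmit lets the sender recover well before the retransmission timer would fire; the timeout path remains as a fallback when ACKs stop arriving entirely.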
The article concludes with references to additional network‑technology videos for deeper study of routing, subnetting, and other foundational concepts.
Architects' Tech Alliance
Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.