Master the Basics: 19 Essential TCP/IP and HTTP Interview Questions Explained
This comprehensive guide answers 19 core networking interview questions, covering the TCP/IP five‑layer model, HTTP fundamentals, GET vs POST, ping, status codes, differences among HTTP/1.0, 1.1, 2, 3, HTTPS, TCP connection handshake and teardown, sliding windows, flow control, half‑ and full‑connection queues, packet framing, and the browser request lifecycle.
1. TCP/IP Network Model Layers
The TCP/IP model consists of five layers:
Application layer : the layer we interact with directly through software such as browsers and email clients on our phones and computers.
Transport layer : provides network support for the application layer; it uses ports to differentiate multiple applications on a device, and includes TCP and UDP protocols.
Network layer : responsible for data transmission between devices, primarily using the IP protocol and IP addresses for device identification.
Data link layer : uses MAC addresses to uniquely identify network interfaces and provides link‑level transmission services for the network layer.
Physical layer : converts data into electrical signals for transmission over physical media, serving the data link layer.
2. Introduction to HTTP Protocol
HTTP is a TCP‑based hypertext transfer protocol that follows a simple request‑response model, defining how clients communicate with servers.
It enables communication between a client and a server and organizes information using hypertext links, such as HTML pages with embedded images and videos.
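To make the request-response model concrete, here is a minimal sketch (the host and path are illustrative placeholders) that assembles a raw GET request as text and parses a canned response, with no network I/O:

```python
# Sketch of HTTP's request-response shape; no network I/O involved.
# "example.com" and "/index.html" are placeholder values.

def build_get_request(host: str, path: str) -> str:
    """Assemble a minimal HTTP/1.1 GET request as raw text."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"   # mandatory in HTTP/1.1 (virtual hosting)
        "Connection: close\r\n"
        "\r\n"                # blank line ends the header section
    )

def parse_status_line(response: str) -> tuple[str, int, str]:
    """Split the first line of a response into version, code, reason."""
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason

request = build_get_request("example.com", "/index.html")
canned_response = "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
print(parse_status_line(canned_response))  # ('HTTP/1.1', 200, 'OK')
```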
3. GET vs POST Differences
Data transmission : GET retrieves data from the server; POST submits data to the server.
Length limit : GET is limited by URL length in practice (browsers cap URLs around 2048 characters); POST has no practical limit.
Data type : GET query strings carry ASCII only; POST bodies can carry any type.
Security : GET parameters appear in the URL, so they are less suitable for sensitive data; POST data is not stored in the URL or browser history (though neither is encrypted without HTTPS).
Visibility : GET parameters are visible in the URL; POST data is hidden in the request body.
Bookmarkable : GET requests can be bookmarked; POST requests cannot.
History : GET requests are saved in browser history; POST requests are not.
Cacheable : GET responses can be cached; POST responses are not cached by default.
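The core of the table is where the data travels. A short sketch (host and path are placeholders) shows the same parameters riding in the URL for GET and in the body for POST:

```python
from urllib.parse import urlencode

params = {"q": "tcp handshake", "page": "2"}
query = urlencode(params)  # 'q=tcp+handshake&page=2'

# GET: data rides in the URL -> visible, bookmarkable, cacheable.
get_request = f"GET /search?{query} HTTP/1.1\r\nHost: example.com\r\n\r\n"

# POST: the same data travels in the request body, not the URL.
body = query
post_request = (
    "POST /search HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    f"{body}"
)
print(get_request.split("\r\n")[0])   # the query string is in the URL
print(post_request.split("\r\n")[0])  # the URL carries no data
```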
4. Purpose of PING
PING tests whether a connection can be established between two hosts by sending ICMP echo‑request packets and measuring round‑trip time and packet loss.
5. Common HTTP Status Codes
1xx : Informational – request received, continue processing.
2xx : Success – request received and processed successfully.
3xx : Redirection – further action needed to complete the request.
4xx : Client error – the request is malformed or cannot be fulfilled.
5xx : Server error – the server failed to fulfill a valid request.
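The class of a status code is simply its leading digit, which a few lines of Python can make explicit:

```python
# Map an HTTP status code to its class by its leading (hundreds) digit.
STATUS_CLASSES = {
    1: "Informational",
    2: "Success",
    3: "Redirection",
    4: "Client error",
    5: "Server error",
}

def classify(code: int) -> str:
    """Return the class name for an HTTP status code."""
    cls = STATUS_CLASSES.get(code // 100)
    if cls is None:
        raise ValueError(f"not a valid HTTP status code: {code}")
    return cls

print(classify(200))  # Success
print(classify(404))  # Client error
```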
6. HTTP/1.1 vs HTTP/1.0
Persistent connections : HTTP/1.1 supports long‑lived connections, reducing the overhead of repeated three‑way handshakes.
Pipelining : HTTP/1.1 allows multiple requests to be sent without waiting for each response.
Range requests : HTTP/1.1 introduces the Range header for resumable downloads.
Host header : HTTP/1.1 includes the Host field to support virtual hosting.
Cache control : HTTP/1.1 adds more cache‑validation headers such as ETag, If‑Modified‑Since, etc.
Additional status codes : HTTP/1.1 defines 24 new status codes, including error codes such as 410 Gone.
7. HTTPS vs HTTP
SSL/TLS encryption : HTTPS adds SSL/TLS between TCP and HTTP, encrypting data in transit.
Connection establishment : HTTPS requires an additional TLS handshake after the TCP three‑way handshake.
Port numbers : HTTP uses port 80, HTTPS uses port 443.
CA certificates : HTTPS requires a digital certificate from a Certificate Authority to verify server identity.
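Python's `ssl` module makes the CA-certificate point visible: a default client context has certificate verification and hostname checking switched on, so a connection to a server without a valid CA-signed certificate would fail the TLS handshake. A small sketch:

```python
import ssl

# A default client-side context enforces the CA check described above:
# certificate verification on, hostname checking on.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# Wrapping a TCP socket with this context would then run the TLS
# handshake after the TCP three-way handshake, e.g. (not executed here):
#   with socket.create_connection(("example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#           ...  # encrypted HTTP traffic
```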
8. HTTP/2 vs HTTP/1.1
Header compression : headers are compressed with HPACK; fields repeated across requests are sent as short index references instead of full text.
Binary framing : HTTP/2 uses a binary format for both headers and payload.
Multiplexed streams : Multiple concurrent streams share a single TCP connection, each with its own priority.
IO multiplexing : The server can interleave responses from different streams to improve latency.
Server push : Servers can proactively send resources to the client.
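The header-compression idea can be sketched with a toy indexing table (this is an illustration of the principle, not real HPACK): the first time a header is sent it goes out as a literal and is added to a shared table; every later occurrence is replaced by its small integer index.

```python
# Toy illustration of HTTP/2-style header indexing (NOT real HPACK).

class ToyHeaderTable:
    def __init__(self) -> None:
        self.table: dict[tuple[str, str], int] = {}

    def encode(self, headers: list[tuple[str, str]]) -> list:
        out = []
        for h in headers:
            if h in self.table:
                out.append(self.table[h])      # seen before: send only the index
            else:
                self.table[h] = len(self.table) + 1
                out.append(h)                  # first time: send the literal
        return out

enc = ToyHeaderTable()
first = enc.encode([(":method", "GET"), ("user-agent", "demo")])
second = enc.encode([(":method", "GET"), ("user-agent", "demo")])
print(first)   # literals on the first request
print(second)  # [1, 2] -- indices only on the repeat
```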
9. HTTP/3 vs HTTP/2
Transport protocol : HTTP/2 runs over TCP, while HTTP/3 runs over UDP.
QUIC : HTTP/3 introduces the QUIC protocol to provide reliable, low‑latency transport.
Handshake count : over TCP, an encrypted connection needs the TCP three‑way handshake plus a separate TLS handshake before data can flow; QUIC merges the transport and TLS 1.3 handshakes, so HTTP/3 can establish a secure connection in roughly one round trip (and resume an earlier connection with 0‑RTT).
10. TCP Connection Establishment (Three‑Way Handshake)
First handshake: Client sends SYN with initial sequence number seq=x and enters SYN‑SENT state.
Second handshake: Server replies with SYN+ACK (SYN=1, ACK=1, ack=x+1) and its own sequence seq=y, entering SYN‑RCVD state.
Third handshake: Client sends ACK (ACK=1, ack=y+1) and both sides reach ESTABLISHED state.
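The sequence/acknowledgment arithmetic of the three steps above can be sketched directly; `x` and `y` stand for the arbitrary initial sequence numbers each side picks:

```python
# Sketch of the seq/ack arithmetic in the TCP three-way handshake.
# x and y are the two sides' initial sequence numbers (ISNs).

def three_way_handshake(x: int, y: int) -> list[dict]:
    syn = {"flags": "SYN", "seq": x}                          # client -> server
    syn_ack = {"flags": "SYN+ACK", "seq": y, "ack": x + 1}    # server -> client
    ack = {"flags": "ACK", "seq": x + 1, "ack": y + 1}        # client -> server
    return [syn, syn_ack, ack]

for segment in three_way_handshake(x=1000, y=5000):
    print(segment)
```

Each ACK number is the peer's sequence number plus one, which is exactly how the two sides synchronize their ISNs.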
11. Why Three Handshakes?
Prevents stale SYN packets from establishing incorrect connections during network congestion.
Three handshakes are the theoretical minimum to establish a reliable connection.
Synchronizes initial sequence numbers for both sides.
12. TCP Connection Termination (Four‑Way Handshake)
First: Client sends FIN (FIN=1, seq=u) and enters FIN‑WAIT‑1.
Second: Server acknowledges with ACK (ack=u+1, seq=v) and enters CLOSE‑WAIT.
Third: Client receives ACK, moves to FIN‑WAIT‑2, waiting for server's FIN.
Fourth: Server sends its FIN (FIN=1, seq=w) and enters LAST‑WAIT (LAST‑ACK); the client replies with ACK (ACK=1, ack=w+1) and enters TIME‑WAIT. The server moves to CLOSED on receiving this ACK, and the client moves to CLOSED after waiting 2 MSL.
13. Why Wait 2 MSL After the Fourth Handshake?
The 2 MSL timer starts when the client receives the server’s FIN and sends the final ACK. If the ACK is lost, the server will retransmit FIN, resetting the timer. Waiting 2 MSL ensures that all stray packets from the old connection disappear and that the final ACK is reliably received.
14. Why Four Handshakes?
TCP is full‑duplex: after the client’s FIN, the server may still have data to send. The fourth handshake ensures both directions have finished transmitting data before the connection is fully closed.
15. What Is a TCP Sliding Window?
A sliding window allows the sender to transmit multiple segments before receiving individual ACKs, improving efficiency. The window size is determined by the receiver and indicates how many bytes can be sent without waiting for acknowledgment.
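A toy model (values illustrative) shows the invariant: the gap between the oldest unacknowledged byte and the next byte to send never exceeds the window, and each cumulative ACK slides the window forward:

```python
# Toy sliding-window sender: at most `window` unacknowledged bytes in
# flight at any time; ACKs arrive in order, one segment per ACK.

def send_all(data_len: int, window: int, segment: int) -> list[tuple[int, int]]:
    """Return (send_base, next_seq) snapshots as transmission proceeds."""
    base, nxt, snapshots = 0, 0, []
    while base < data_len:
        # Fill the window with new segments.
        while nxt < data_len and nxt - base < window:
            nxt = min(nxt + segment, data_len)
        snapshots.append((base, nxt))
        base = min(base + segment, nxt)  # a cumulative ACK slides the window
    return snapshots

# 10 bytes of data, 4-byte window, 2-byte segments.
snapshots = send_all(10, 4, 2)
print(snapshots)
```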
16. Flow Control When the Receiver Is Overloaded
If the receiver cannot process incoming data quickly enough, it advertises a reduced window size, informing the sender of the remaining buffer capacity. The sender then limits its transmission to the advertised size, preventing buffer overflow and packet loss.
Example: the receiver advertises a 200‑byte window and the sender transmits 100 bytes, leaving 100 bytes available. If the OS then shrinks the receive buffer so only 50 bytes remain while the sender still believes it may send 100 bytes, up to 50 bytes can be lost. To prevent this, TCP implementations must not shrink the buffer and the advertised window at the same time: the receiver first advertises the smaller window and reduces the buffer only after in‑flight data has been accounted for.
17. TCP Half‑Connection and Full‑Connection Queues
When a server receives a SYN, the connection is placed in the half‑connection (SYN) queue. After the final ACK, the connection moves to the full‑connection (accept) queue, awaiting the application’s accept call. Both queues have size limits; excess connections are dropped or reset.
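The full-connection queue limit is what the familiar `listen()` backlog argument sets. A minimal sketch (the backlog value is illustrative, and the kernel may clamp it to a system maximum such as `somaxconn`):

```python
import socket

# The listen() backlog caps the full-connection (accept) queue:
# handshakes that have completed wait here until the application
# calls accept().

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(128)             # accept-queue limit (kernel may clamp it)
host, port = srv.getsockname()
print(f"listening on {host}:{port}")
srv.close()
```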
18. TCP Packet Sticking (Sticky Packets) and Splitting
Sticky packets occur because TCP is a byte stream with no built‑in message boundaries: multiple small messages are coalesced into one TCP segment when the data written is smaller than the send buffer and gets batched, or when the receiver reads slowly and several messages accumulate in its buffer. Splitting occurs when a message exceeds the maximum segment size (MSS) or the remaining send‑buffer space, so it is divided across segments.
Solutions include:
Prefix each packet with a length header so the receiver knows the exact packet size.
Use fixed‑length packets.
Insert a unique delimiter between packets.
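The first solution, a length header, can be sketched in a few lines: each message is prefixed with a 4‑byte big‑endian length, so the receiver can recover message boundaries no matter how the stream was segmented:

```python
import struct

# Length-prefix framing: 4-byte big-endian length, then the payload.

def frame(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

def deframe(stream: bytes) -> list[bytes]:
    """Recover complete messages from a byte stream; ignore a trailing partial."""
    messages, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        if offset + 4 + length > len(stream):
            break  # incomplete message: wait for more bytes
        messages.append(stream[offset + 4 : offset + 4 + length])
        offset += 4 + length
    return messages

# Two messages "stuck together" in one stream are still separable.
stream = frame(b"hello") + frame(b"world!")
print(deframe(stream))  # [b'hello', b'world!']
```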
19. What Happens After Pressing Enter in the Browser Address Bar?
Parse the URL and generate an HTTP request.
Query DNS to resolve the domain to an IP address (using cache if available).
Establish a TCP connection to the server (three‑way handshake).
Process the TCP packets and parse the HTTP request on the server.
Server sends an HTTP response.
Browser receives the response, renders the page, and displays it.
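The first step, parsing the URL, is easy to demonstrate with the standard library (the URL is an illustrative placeholder); each component feeds a later step of the lifecycle:

```python
from urllib.parse import urlsplit

# Step 1 of the lifecycle: break the URL into the parts the browser
# needs before DNS resolution and the TCP connection.

url = "https://example.com:8443/path/page?q=1#top"
parts = urlsplit(url)
print(parts.scheme)    # https       -> decides TLS and the default port
print(parts.hostname)  # example.com -> goes to DNS resolution
print(parts.port)      # 8443        -> TCP connection target
print(parts.path)      # /path/page  -> goes into the HTTP request line
```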
NiuNiu MaTe
Joined Tencent (nicknamed "Goose Factory") through campus recruitment at a second‑tier university. Career path: Tencent → foreign firm → ByteDance → Tencent. Started as an interviewer at the foreign firm and hopes to help others.