Key OS and Network Interview Questions Explained: Multithreading, Virtual Memory, TCP Congestion Control, and HTTP/2
This article breaks down common operating‑system and networking interview topics, covering multithreading on a single core, segmentation and paging virtual memory, page faults and swap, kernel vs user mode, HTTP status codes and protocol evolution, and TCP congestion‑control algorithms with practical examples.
Operating System
Can a single‑core CPU run multiple threads?
A single‑core CPU creates the illusion of parallelism by rapidly context‑switching between threads. Each thread runs for a short time slice (typically a few milliseconds to a few tens of milliseconds) before the scheduler switches to another, so within a second the CPU makes progress on many threads. They run concurrently, interleaved in time, but never truly in parallel.
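A minimal Python sketch of this interleaving (thread names and counts are illustrative): even on one core, the scheduler switches between the threads, and all of them finish their work.

```python
import threading

results = []

def worker(name, n):
    # Each thread runs briefly; the scheduler may switch to another
    # thread between iterations.
    for i in range(n):
        results.append((name, i))

threads = [threading.Thread(target=worker, args=(f"t{i}", 3)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All 4 threads * 3 iterations = 12 units of work completed, interleaved.
print(len(results))  # 12
```

The exact interleaving order differs from run to run; only the total amount of completed work is deterministic.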
Virtual address translation
Modern OSes use two complementary mechanisms to map a process's virtual address space to physical memory.
Segmentation : The virtual space is divided into variable‑size segments (code, data, heap, stack). A segment selector stored in a segment register indexes a segment‑table entry that contains the segment's base address, limit and privilege level. The effective address is computed as physical = base_of_segment + offset_within_segment. Segmentation simplifies logical organization but can cause external fragmentation.
Paging : The virtual space is split into fixed‑size pages (e.g., 4 KB). The virtual address is divided into a page number and a page offset. The page number indexes a page‑table entry that holds the physical page frame base address. The final physical address is physical = frame_base + offset. Paging eliminates external fragmentation at the cost of possible internal fragmentation.
Example (segmentation): virtual address = (segment 3, offset 500) → physical = base of segment 3 (7000) + 500 = 7500.
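The segmentation example above can be sketched as a lookup against a toy segment table (the table contents are hypothetical, matching the numbers in the example):

```python
# Hypothetical segment table: segment number -> (base address, limit)
segment_table = {3: (7000, 1000)}

def translate_segment(seg, offset):
    base, limit = segment_table[seg]
    if offset >= limit:
        # An out-of-range offset would raise a segmentation fault.
        raise MemoryError("offset beyond segment limit")
    return base + offset

print(translate_segment(3, 500))  # 7500, as in the example above
```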
Paging translation consists of three steps:
Split the virtual address into page number and offset.
Look up the page table to obtain the corresponding physical frame number.
Combine the frame base address with the offset to form the physical address.
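The three steps above can be sketched as follows (the page table and addresses are illustrative, assuming 4 KB pages):

```python
PAGE_SIZE = 4096  # 4 KB pages

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2}

def translate_page(vaddr):
    vpn = vaddr // PAGE_SIZE           # step 1: split into page number...
    offset = vaddr % PAGE_SIZE         # ...and page offset
    frame = page_table[vpn]            # step 2: page-table lookup
    return frame * PAGE_SIZE + offset  # step 3: frame base + offset

print(translate_page(4196))  # page 1, offset 100 -> frame 2 -> 8292
```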
What happens when a 32‑bit process allocates 2 GB?
The malloc call reserves virtual address space only. Physical pages are allocated on first access, triggering a page‑fault interrupt. The kernel’s page‑fault handler checks for free physical memory:
If free pages exist, they are mapped to the faulting virtual page.
If memory is exhausted, the kernel may evict pages to swap space.
Swap mechanism
When RAM is insufficient, the OS writes rarely‑used pages to a designated disk area (swap). The freed RAM can be reused by active processes. When a swapped‑out page is accessed again, a page‑fault brings it back into RAM (swap‑in). Swap expands usable memory but incurs high latency due to disk I/O.
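The fault/evict/swap cycle can be modeled with a toy simulator (frame count, page names, and the LRU eviction policy are all illustrative assumptions, not details from the article):

```python
from collections import OrderedDict

NUM_FRAMES = 2
page_to_frame = OrderedDict()   # resident pages, in LRU order (oldest first)
swap = set()                    # pages written out to the swap area
free_frames = list(range(NUM_FRAMES))

def access(page):
    if page in page_to_frame:            # resident: refresh LRU position
        page_to_frame.move_to_end(page)
        return "hit"
    # Page fault: a physical frame is needed.
    if not free_frames:                  # RAM exhausted: evict the LRU page
        victim, frame = page_to_frame.popitem(last=False)
        swap.add(victim)                 # swap-out to disk
        free_frames.append(frame)
    frame = free_frames.pop()
    swap.discard(page)                   # swap-in if it was evicted earlier
    page_to_frame[page] = frame
    return "fault"

history = [access(p) for p in ["A", "B", "A", "C", "B"]]
# A and B fault in; A hits; C evicts B to swap; B faults back in from swap.
print(history)
```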
Kernel mode vs. user mode
Kernel mode : Executes with the highest privilege level, allowing direct hardware access, execution of privileged instructions, and unrestricted memory access. Kernel code and device drivers run here.
User mode : Runs with limited privileges. Applications cannot execute privileged instructions or access kernel memory directly; they request services via system calls.
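A small illustration of the boundary: Python's os module functions are thin wrappers over system calls (pipe(2), write(2), read(2) on Linux). User‑mode code cannot touch the kernel's pipe buffer directly; every transfer crosses into kernel mode through the syscall interface.

```python
import os

r, w = os.pipe()          # pipe(2): kernel creates an in-kernel buffer
os.write(w, b"hello")     # write(2): traps into kernel mode to copy data in
data = os.read(r, 5)      # read(2): kernel copies data back to user space
os.close(r)
os.close(w)
print(data)  # b'hello'
```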
Network Protocols
Common HTTP response codes
HTTP status codes are grouped into five classes:
1xx – Informational
2xx – Success (e.g., 200 OK)
3xx – Redirection (e.g., 301 Moved Permanently, 302 Found)
4xx – Client error (e.g., 404 Not Found, 405 Method Not Allowed)
5xx – Server error (e.g., 500 Internal Server Error)
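Because the class is encoded in the first digit, classification is a simple integer division (a toy helper, not part of any standard library):

```python
def status_class(code):
    """Map an HTTP status code to its class via the leading digit."""
    classes = {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    return classes.get(code // 100, "unknown")

print(status_class(200), status_class(404), status_class(500))
```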
Differences between HTTP/1.1 and HTTP/2
HTTP/1.1 improvements over HTTP/1.0
Persistent (keep‑alive) connections reduce connection‑setup overhead.
Pipelining allows multiple requests to be sent without waiting for each response.
Remaining limitations of HTTP/1.1
Headers are sent uncompressed; only the message body can be compressed.
Redundant header transmission wastes bandwidth.
Head‑of‑line blocking: the server must respond to requests in order, delaying later requests.
No request prioritization.
Clients must always initiate requests; servers are purely passive.
HTTP/2 enhancements
Header compression using the HPACK algorithm.
Binary framing format (frames) for efficient parsing.
Multiplexed streams allow many concurrent requests/responses over a single TCP connection.
Server push enables the server to proactively send resources (e.g., CSS) without an explicit client request.
TCP congestion control overview
TCP regulates the amount of data in flight using two variables:
cwnd – congestion window (bytes or MSS units)
ssthresh – slow‑start threshold
Slow start

cwnd = 1 MSS
while cwnd < ssthresh:
    on each ACK: cwnd += 1 MSS              # exponential growth per RTT

Congestion avoidance

while cwnd >= ssthresh:
    on each ACK: cwnd += MSS * (MSS / cwnd) # linear growth, ~1 MSS per RTT

Congestion event
Detected by a timeout or by receiving three duplicate ACKs.
Triggers retransmission of the lost segment.
Fast retransmit & fast recovery
# on three duplicate ACKs
ssthresh = cwnd / 2            # halve the threshold
cwnd = ssthresh + 3 * MSS      # fast recovery: account for the three buffered segments
retransmit the lost segment    # fast retransmit, without waiting for a timeout
# on the next new ACK: cwnd = ssthresh, and congestion avoidance resumes
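The slow‑start, congestion‑avoidance, and fast‑recovery rules above can be combined into a toy Reno‑style step function (cwnd and ssthresh in MSS units; the starting values are hypothetical):

```python
def reno_step(cwnd, ssthresh, event):
    """One congestion-control transition, cwnd/ssthresh in MSS units."""
    if event == "ack":
        if cwnd < ssthresh:
            return cwnd + 1, ssthresh        # slow start: +1 MSS per ACK
        return cwnd + 1 / cwnd, ssthresh     # congestion avoidance: ~+1 MSS/RTT
    if event == "dup3":                      # three duplicate ACKs
        ssthresh = cwnd / 2                  # halve the threshold
        return ssthresh + 3, ssthresh        # fast retransmit & recovery
    if event == "timeout":
        return 1, cwnd / 2                   # severe loss: back to slow start
    raise ValueError(event)

cwnd, ssthresh = 1, 4
for _ in range(5):
    cwnd, ssthresh = reno_step(cwnd, ssthresh, "ack")
# cwnd climbed 1 -> 2 -> 3 -> 4 (slow start), then crept past 4 (avoidance)
print(round(cwnd, 2))
cwnd, ssthresh = reno_step(cwnd, ssthresh, "dup3")
print(round(cwnd, 2), round(ssthresh, 2))
```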
Factors affecting TCP window size
Receiver’s advertised window.
Network bandwidth and latency (BDP).
Congestion‑control algorithm adjustments.
Router and network device buffer capacities.
Operating‑system and application configuration (e.g., net.core.rmem_max, net.ipv4.tcp_rmem).
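For the bandwidth‑delay product in particular, the arithmetic is worth seeing once (the link numbers below are an illustrative example):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bytes in flight needed to keep the pipe full."""
    return bandwidth_bps / 8 * rtt_seconds

# A 100 Mbit/s link with a 50 ms RTT needs ~625 KB of window to stay full;
# a smaller advertised or congestion window leaves the link underutilized.
print(int(bdp_bytes(100e6, 0.05)))  # 625000
```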
Liangxu Linux
Liangxu, a self‑taught IT professional now working as a Linux development engineer at a Fortune 500 multinational, shares extensive Linux knowledge—fundamentals, applications, tools, plus Git, databases, Raspberry Pi, etc. (Reply “Linux” to receive essential resources.)