
Comprehensive Backend Development Interview Notes: OS Memory, Concurrency, Networking, Databases, Redis, Kafka, and Docker

This article compiles detailed backend interview notes covering salary expectations, OS stack vs heap differences, process and thread concepts, DNS resolution, ping protocol, MySQL indexing and locking mechanisms, Redis performance characteristics, Kafka throughput optimizations, and Docker's underlying implementation.


Hello, I am Xiao Lin. Tencent Cloud Intelligence is a subsidiary of Tencent focusing on cloud-related projects, with offices in Xi'an, Changsha, and Wuhan.

The 2025 Tencent Cloud Intelligence campus recruitment offers for development positions range from RMB 12k to 14.5k monthly base salary (paid over 16 months) plus housing subsidies and signing bonuses, resulting in total annual packages of roughly RMB 220k–240k for regular offers and RMB 260k–270k for special offers.

Operating System

Stack vs. Heap

Allocation : Heap memory is dynamically allocated by the programmer at runtime; stack memory is allocated and released automatically by the compiler.

Memory Management : Heap requires manual allocation and deallocation, risking leaks; stack is automatically managed with LIFO semantics.

Size and Speed : Heap is larger but slower to allocate; stack is smaller and faster.
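The contrast is easiest to see with a small experiment. A rough Python sketch (CPython's call stack is bounded by the recursion limit, while objects such as lists live on the heap and outlive the frame that created them):

```python
import sys

limit = sys.getrecursionlimit()
sys.setrecursionlimit(1000)  # cap the call stack so exhaustion is quick

def stack_depth(n=0):
    """Each call pushes a stack frame; the bounded stack eventually overflows."""
    try:
        return stack_depth(n + 1)
    except RecursionError:
        return n

def make_buffer():
    # The list is heap-allocated: it survives after this frame is popped.
    return [0] * 10_000

depth = stack_depth()   # stack space runs out after a few hundred frames
buf = make_buffer()     # heap object persists beyond make_buffer's scope
sys.setrecursionlimit(limit)  # restore the interpreter's original limit
```

The same program can allocate a 10,000-element heap object trivially, yet cannot nest even a thousand stack frames: the stack is fast but small and scope-bound.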

Why Temporary Variables on Stack and Objects on Heap?

Stack variables have simple lifecycle management and fast access, while heap objects support dynamic size, sharing, and persistence beyond the scope of a function.

Why Not Reverse?

Stack size is limited; large or dynamic objects may cause overflow.

Placing objects on the stack would bind their lifetime to a scope, complicating sharing and persistence.

Frequent large allocations would quickly exhaust the limited stack space and degrade performance; the heap's allocator is designed for exactly these dynamic allocation patterns.

Process vs. Thread

Resource Ownership : Processes have independent memory and resources; threads share a process's memory.

Scheduling and Switching : Process scheduling incurs higher overhead; thread switching is lighter because only registers and stack need to be saved.

Stability and Security : A crash in one process does not affect others, while a thread crash can bring down the whole process.
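The resource-ownership difference can be demonstrated directly. A minimal sketch, assuming a Unix-like system where `os.fork` is available:

```python
import os
import threading

counter = {"hits": 0}

def bump():
    counter["hits"] += 1

# A thread shares the parent's memory: its write is visible afterwards.
t = threading.Thread(target=bump)
t.start()
t.join()
# counter["hits"] is now 1.

# A forked child process gets its own copy of memory (copy-on-write):
# its write stays in its own address space.
pid = os.fork()
if pid == 0:
    bump()          # increments only the child's private copy
    os._exit(0)     # child exits without running anything further
os.waitpid(pid, 0)
# Still 1 in the parent: the process's increment never reached us.
```

This is also why a crashing thread can take the whole process down (shared address space), while a crashing child process cannot corrupt its parent.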

Process vs. Thread vs. Coroutine

Process : OS‑level resource allocation, isolated memory, high context‑switch cost.

Thread : Shares process memory, low switch cost, but requires synchronization.

Coroutine : User‑level lightweight threads with minimal switch cost; scheduling is cooperative (the program yields control explicitly), which makes the programming model more involved.
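The coroutine model above can be sketched with Python's asyncio: many coroutines run cooperatively in one OS thread, and switches happen only at explicit yield points.

```python
import asyncio

async def worker(name, delay):
    # `await` yields control back to the event loop instead of blocking
    # an OS thread; the switch costs no kernel context switch.
    await asyncio.sleep(delay)
    return name

async def main():
    # Thousands of these could share a single thread; the event loop
    # schedules them whenever one of them awaits.
    return await asyncio.gather(worker("a", 0.01), worker("b", 0.01))

results = asyncio.run(main())
```

`gather` preserves argument order, so both coroutines overlap their sleeps yet return results deterministically.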

Process Switch vs. Thread Switch Speed

Thread switching is faster because it does not change the address space (so TLB entries and caches stay warm); only registers, the stack pointer, and the program counter need to be swapped.

Network

DNS Workflow

Client sends a DNS query to the local DNS server.

If cached, the local server returns the IP; otherwise it queries the root server.

The root server points to the .com TLD server.

The TLD server returns the authoritative server for the domain.

The authoritative server provides the final IP address.

The local server returns the IP to the client.
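The iterative chain above can be sketched with hypothetical in-memory "servers" (all names here are illustrative, and 192.0.2.x is a documentation-only address range, not real DNS data):

```python
# Toy zone data standing in for the real server hierarchy (illustrative).
SERVERS = {
    "root": {"com.": "com-tld"},                         # root -> TLD referral
    "com-tld": {"example.com.": "example-auth"},         # TLD -> authoritative
    "example-auth": {"www.example.com.": "192.0.2.10"},  # final A record
}
CACHE = {}  # the local resolver's cache

def resolve(name):
    """Iteratively resolve `name`, mirroring the six steps above."""
    if name in CACHE:                                    # step 2: cache hit
        return CACHE[name]
    labels = name.rstrip(".").split(".")
    tld_server = SERVERS["root"][labels[-1] + "."]                  # step 3
    auth_server = SERVERS[tld_server][".".join(labels[-2:]) + "."]  # step 4
    ip = SERVERS[auth_server][name]                                 # step 5
    CACHE[name] = ip                                     # step 6: cache it
    return ip

ip = resolve("www.example.com.")
```

A second call for the same name short-circuits at the cache, which is why most real-world lookups never reach the root servers.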

Protocol Used for DNS Queries

DNS primarily uses UDP because it offers low latency, simplicity, and lightweight transmission suitable for short request/response messages.

How UDP Reliability Is Handled in DNS

DNS tolerates UDP loss by employing timeout retransmission, retries, and caching mechanisms.
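The timeout-retransmission pattern a stub resolver layers on top of UDP can be sketched like this (the flaky function below is a stand-in for a lossy network, not a real DNS call):

```python
def query_with_retries(send_query, tries=3):
    """Resend the same query until it succeeds or retries run out."""
    last_error = None
    for _ in range(tries):
        try:
            return send_query()      # real DNS: sendto() then recvfrom()
        except TimeoutError as exc:  # no reply within the timeout window
            last_error = exc         # retransmit on the next iteration
    raise last_error

attempts = {"n": 0}

def lossy_network():
    attempts["n"] += 1
    if attempts["n"] < 3:            # the first two datagrams are "lost"
        raise TimeoutError("no response")
    return "192.0.2.10"

answer = query_with_retries(lossy_network)
```

Because a DNS query is idempotent, resending the identical datagram is safe, which is what makes this simple scheme sufficient in place of TCP's reliability machinery.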

Why Ping Uses ICMP Instead of UDP

ping uses ICMP because ICMP is designed for network diagnostics and provides built‑in error reporting, whereas UDP lacks such mechanisms.

MySQL

Why MySQL Indexes Use B+ Trees

B+Tree vs. B‑Tree : Only leaf nodes store full records, so internal nodes pack more keys per page, and the linked leaf nodes make range scans efficient.

B+Tree vs. Binary Tree : Higher fan‑out reduces tree height, leading to fewer disk I/O operations.

B+Tree vs. Hash : B+Tree supports both equality and range queries, unlike hash indexes.
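The fan-out argument becomes concrete with a back-of-envelope calculation (the ~1,000 keys per page is a rough assumption for a 16 KB InnoDB page, not an exact figure):

```python
def height(total_rows, fanout):
    """Levels needed so fanout**h >= total_rows; each level costs one page read."""
    h = 1
    capacity = fanout
    while capacity < total_rows:
        capacity *= fanout
        h += 1
    return h

bplus_height = height(1_000_000_000, 1_000)  # ~1,000 keys per 16 KB page
binary_height = height(1_000_000_000, 2)     # binary tree: 2 children per node
```

A billion rows fit in a B+ tree of height 3 (three page reads per lookup), whereas a balanced binary tree needs about 30 levels, i.e. an order of magnitude more disk I/Os.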

MySQL Lock Types

Locks are categorized into global, table‑level, and row‑level locks.

Global Lock : FLUSH TABLES WITH READ LOCK makes the entire database read‑only, used for full‑database backups.

Table‑Level Locks : Include explicit table locks, metadata locks (MDL), and intention locks.

Row‑Level Locks : InnoDB supports row locks (S and X), gap locks, and next‑key (record + gap) locks.

Redis

Redis achieves high throughput (on the order of 100k ops/s on a single instance) because most operations run purely in memory, and commands are executed by a single‑threaded event loop using I/O multiplexing (epoll/select), which avoids lock contention and context switches between threads. (Redis 6+ adds optional I/O threads, but command execution remains single‑threaded.)
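The event-loop idea can be sketched with Python's selectors module, which wraps epoll/select behind one interface. A single thread watches many sockets and handles each command in-line (the PING/PONG exchange below is a toy stand-in for the Redis protocol):

```python
import selectors
import socket

# One selector watches many sockets from a single thread, like Redis's loop.
sel = selectors.DefaultSelector()
client, server = socket.socketpair()   # stand-in for a real network client
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

client.sendall(b"PING")

# The event loop: block until some registered socket becomes readable.
replies = []
for key, _ in sel.select(timeout=1):
    data = key.fileobj.recv(64)        # guaranteed ready: recv won't block
    if data == b"PING":
        key.fileobj.sendall(b"+PONG")  # handle the command in-line
        replies.append(client.recv(64))

sel.close()
client.close()
server.close()
```

One thread, no locks: readiness notification replaces one-thread-per-connection, which is the core of Redis's concurrency model.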

Kafka

Kafka’s massive throughput stems from sequential disk writes, batch processing, zero‑copy transmission, and optional compression.
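Two of those techniques can be demonstrated with standard OS primitives, assuming a Unix-like system where `os.sendfile` is available (the file and record names below are illustrative):

```python
import os
import socket
import tempfile

# Batching: many records become one sequential append (a single syscall).
batch = [b"order-1\n", b"order-2\n", b"order-3\n"]
log_path = os.path.join(tempfile.mkdtemp(), "segment.log")
with open(log_path, "ab") as log:
    log.write(b"".join(batch))         # sequential write to the segment file

# Zero-copy: sendfile moves file bytes straight to a socket inside the
# kernel, never copying them through user space -- the syscall behind
# Kafka's consumer read path.
rx, tx = socket.socketpair()
size = os.path.getsize(log_path)
with open(log_path, "rb") as log:
    sent = os.sendfile(tx.fileno(), log.fileno(), 0, size)

payload = b""
while len(payload) < size:             # drain the socket until all bytes arrive
    payload += rx.recv(1024)
rx.close()
tx.close()
```

Sequential appends let the disk and page cache work at full bandwidth, and zero-copy removes the user-space detour on the way back out.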

Docker

Docker isolates containers using Linux namespaces (PID, network, mount, UTS, IPC, user) for view isolation and cgroups for CPU, memory, and I/O resource limits.
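On Linux, each process's namespace membership is visible under /proc: two processes share a namespace exactly when these IDs match, and a container's processes point at freshly created ones. A Linux-only sketch:

```python
import os

# Each /proc/self/ns/* symlink names the namespace type and the inode
# that identifies the specific namespace this process belongs to.
ns_types = ["pid", "net", "mnt", "uts", "ipc"]
memberships = {t: os.readlink(f"/proc/self/ns/{t}") for t in ns_types}

# e.g. {"pid": "pid:[4026531836]", ...}. Docker creates new namespaces
# per container; the matching cgroup limits live under /sys/fs/cgroup.
```

Running the same snippet inside a container and on the host shows different inode numbers, which is the whole of "view isolation" at the kernel level.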

Algorithm Question

Example problem: interval merging — given a list of intervals, merge all overlapping intervals and return the non‑overlapping result.
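The standard approach is to sort by start point and sweep once, extending the current interval on overlap:

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals; O(n log n) from the sort."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # overlap: extend the last
        else:
            merged.append([start, end])              # gap: open a new interval
    return merged

result = merge_intervals([[1, 3], [2, 6], [8, 10], [15, 18]])
```

After sorting, any interval that can merge with an earlier one must overlap the most recently emitted interval, so a single pass suffices.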

Written by IT Services Circle