
Comprehensive Tencent Backend Development Interview Review: Java, OS, Networking, Algorithms, Databases and System Design

This article provides a detailed walkthrough of a Tencent backend interview covering Java thread models, HashMap internals, garbage collection, operating‑system concepts, process scheduling, IPC mechanisms, I/O multiplexing, virtual memory, networking protocols, Redis data types, MySQL B+‑tree characteristics, and a classic logic puzzle, offering both theory and practical insights.


Tencent's interview style emphasizes solid computer-science fundamentals, especially operating systems and networking, so candidates should weight their preparation toward these areas.

The interview lasted two hours: one hour of algorithm problems and one hour of core fundamentals. The interviewers were supportive and offered guidance throughout.

Key topics covered include:

Java: HashMap, garbage‑collection algorithms, thread model

Operating System: user vs kernel mode, process scheduling, process‑thread differences, virtual memory, address space layout

Networking: TCP three‑way handshake, congestion control, HTTP vs HTTPS, select/poll/epoll

Redis: common data structures, Zset underlying implementation

MySQL: B+‑tree characteristics, reducing I/O by caching

Additional: a classic 1000‑bottle‑poison puzzle and a short algorithm list

Java

Are Java threads the same as OS threads?

On HotSpot under Linux, Java creates a thread by calling pthread_create, so each Java thread maps 1:1 to a native OS thread.

How does HashMap work internally?

When put is called, the key's hashCode (spread and masked by the table size) determines the bucket; the map doubles its capacity once the entry count exceeds capacity × load factor (0.75 by default). Retrieval uses hashCode to locate the bucket and then equals() to confirm the entry. Colliding entries are stored in a linked list; since Java 8, a bucket's list is converted to a red-black tree when it grows past eight entries (provided the table has at least 64 buckets).
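The bucket computation above can be sketched in a few lines. This is a simplified illustration, not HashMap's actual source: since Java 8, HashMap spreads the hash by XOR-ing in the high bits, then masks it by (capacity − 1), which works because the capacity is always a power of two.

```java
// Sketch of how java.util.HashMap (Java 8+) picks a bucket.
public class HashMapIndexDemo {
    static int spread(int h) {
        return h ^ (h >>> 16);   // mix high bits into low bits before masking
    }

    static int bucketIndex(Object key, int capacity) {
        // capacity is a power of two, so (capacity - 1) is an all-ones bitmask
        return spread(key.hashCode()) & (capacity - 1);
    }

    public static void main(String[] args) {
        int cap = 16;            // default initial capacity
        System.out.println("bucket for \"hello\": " + bucketIndex("hello", cap));
        // resize triggers once size > capacity * loadFactor (16 * 0.75 = 12)
        System.out.println("resize threshold: " + (int) (cap * 0.75f));
    }
}
```

The high-bit XOR matters because the mask only keeps the low bits; without spreading, keys whose hash codes differ only in high bits would all collide.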

How to determine if an object is garbage?

Java’s GC uses reachability analysis starting from GC roots; objects not reachable are considered garbage. Reference counting is not used because it cannot handle cyclic references.

Reference counting (not used by Java): increments on new references and decrements when references are released; object is garbage when the count reaches zero.

Reachability analysis: traverses the object graph from GC roots and marks all reachable objects; unmarked objects are garbage.
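The cyclic-reference weakness of reference counting can be shown with a toy model. This is emphatically not how the JVM works; it simulates counters by hand to show why two objects that point at each other are never freed once all external references are gone.

```java
import java.util.ArrayList;
import java.util.List;

// Toy reference-counting model (NOT the JVM) illustrating the cycle problem.
public class RefCountDemo {
    static class Node {
        int refCount;
        List<Node> refs = new ArrayList<>();
        void addRef(Node target)  { refs.add(target);    target.refCount++; }
        void dropRef(Node target) { refs.remove(target); target.refCount--; }
    }

    // Build a -> b -> a cycle, then release the last external reference.
    static int[] cycleCounts() {
        Node a = new Node(), b = new Node(), root = new Node();
        root.addRef(a);   // external reference: a.refCount == 1
        a.addRef(b);      // cycle edge:        b.refCount == 1
        b.addRef(a);      // cycle edge:        a.refCount == 2
        root.dropRef(a);  // last external reference released
        return new int[]{a.refCount, b.refCount};
    }

    public static void main(String[] args) {
        int[] c = cycleCounts();
        // Both counts are still 1, so a naive collector would leak both objects,
        // even though nothing reachable from a GC root points at them anymore.
        System.out.println("a: " + c[0] + ", b: " + c[1]);
    }
}
```

Reachability analysis has no such problem: neither `a` nor `b` is reachable from `root`, so both are collected.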

If a HashMap stores one million objects, what GC issues may arise?

Scanning a huge number of objects with reachability analysis can significantly increase GC pause time.

Operating System

User mode vs kernel mode

Access permissions: user mode has limited resources; kernel mode has full system access.

CPU instruction set: user mode executes non‑privileged instructions; kernel mode can execute privileged instructions.

Interrupt/exception handling: user mode transfers control to kernel handlers; kernel mode handles them directly.

Memory protection: user processes can only access their own memory; kernel can access all memory.

Security: user mode processes are isolated; kernel mode requires strict protection.

Process scheduling algorithms

01 First‑Come‑First‑Serve (FCFS)

Non‑preemptive; the first process in the ready queue runs until it blocks or finishes.

02 Shortest Job First (SJF)

Prefers the process with the shortest execution time, improving throughput but can cause starvation for long jobs.

03 Highest Response Ratio Next (HRRN)

Calculates response ratio = (waiting time + service time) / service time; balances short and long jobs.

04 Round Robin (RR)

Each process receives a time slice (typically 20 ms–50 ms); after the slice expires, the CPU is given to the next process.

05 Highest Priority First (HPF)

Chooses the ready‑queue process with the highest priority; can be preemptive or non‑preemptive.

06 Multilevel Feedback Queue (MLFQ)

Combines RR and HPF: multiple queues with decreasing priority, shorter time slices for higher‑priority queues, and feedback that moves a process to a lower queue if it exceeds its time slice.
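The difference between FCFS and SJF can be made concrete with the classic textbook burst times, assuming all jobs arrive at time 0 and run non-preemptively (a simplified sketch, not a full scheduler):

```java
import java.util.Arrays;

// Average waiting time under FCFS (arrival order) vs SJF (shortest burst first).
public class SchedulingDemo {
    // Waiting time of job i = sum of bursts of all jobs that ran before it.
    static double avgWait(int[] bursts) {
        long wait = 0, elapsed = 0;
        for (int b : bursts) { wait += elapsed; elapsed += b; }
        return (double) wait / bursts.length;
    }

    public static void main(String[] args) {
        int[] arrivalOrder = {24, 3, 3};   // classic example burst times (ms)
        int[] sjfOrder = arrivalOrder.clone();
        Arrays.sort(sjfOrder);             // SJF = run in order of burst length
        System.out.println("FCFS avg wait: " + avgWait(arrivalOrder)); // 17.0
        System.out.println("SJF  avg wait: " + avgWait(sjfOrder));     // 3.0
    }
}
```

The same numbers also show SJF's starvation risk: the 24 ms job always waits behind every shorter job that arrives.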

Inter‑process communication mechanisms

Linux provides several IPC methods:

Anonymous pipe: unidirectional, exists only in memory, usable between parent‑child processes.

Named pipe (FIFO): appears as a file in the filesystem, usable between unrelated processes.

Message queue: stores typed messages in kernel space, avoiding raw byte streams.

Shared memory: a fast IPC method where processes map the same memory region.

Semaphores: protect shared resources, providing mutual exclusion and synchronization.

Signals: asynchronous notifications from kernel to processes (e.g., SIGKILL, SIGSTOP).

Sockets: for communication across hosts or locally, supporting TCP and UDP.
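In Java, the closest analogue to an anonymous pipe is java.nio.channels.Pipe: a unidirectional sink → source channel pair. This sketch stays within one process (Java threads rather than parent and child processes), but the semantics mirror the list above:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.charset.StandardCharsets;

// Write a message into the pipe's sink and read it back from its source.
public class PipeDemo {
    static String roundTrip(String msg) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.sink().write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8)));
        ByteBuffer buf = ByteBuffer.allocate(64);
        pipe.source().read(buf);          // blocks until data is available
        buf.flip();                       // switch the buffer from write to read mode
        return StandardCharsets.UTF_8.decode(buf).toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello ipc")); // prints "hello ipc"
    }
}
```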

IO multiplexing: select, poll, epoll

Select copies the entire descriptor set to kernel space, scans it linearly, and copies the result back—costly for large numbers of sockets.

Poll replaces the fixed‑size bitmap with a dynamic array, removing the 1024‑descriptor limit but still requiring linear scans.

epoll uses a kernel‑side red‑black tree to track watched descriptors and an event‑driven list for ready sockets, achieving O(log n) updates and O(1) wait, solving the C10K problem.
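Java's NIO Selector sits on top of the platform multiplexer (epoll on Linux). A minimal sketch: register a pipe's read end with a Selector, make it readable, and select() reports exactly one ready channel without the application scanning anything:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Event-driven readiness notification via Selector (epoll-backed on Linux).
public class SelectorDemo {
    static int readyCount() throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);              // selectable channels must be non-blocking
        pipe.source().register(selector, SelectionKey.OP_READ);
        pipe.sink().write(ByteBuffer.wrap(new byte[]{1}));   // make the source readable
        int ready = selector.select();                       // blocks until >= 1 channel is ready
        selector.close();
        return ready;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ready channels: " + readyCount()); // ready channels: 1
    }
}
```

The key contrast with select/poll: the kernel hands back only the ready set, so the cost per wakeup does not grow with the number of watched descriptors.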

Why does an OS need virtual memory?

Without virtual memory, processes would share the same physical addresses, causing conflicts and crashes. Virtual addresses isolate each process, and the OS maps them to physical memory transparently.

Process vs thread differences

Resource allocation: processes have independent address spaces and file descriptors; threads share the process’s resources.

Scheduling and context switch: switching processes means switching address spaces (page tables, TLB flush); switching between threads of the same process keeps the address space, so the overhead is lower.

Concurrency: processes communicate via IPC; threads communicate via shared memory.

What resides in a process address space?

The user space is divided (low to high) into code, data, BSS, heap, memory‑mapped region, and stack (default 8 MB). A reserved region prevents accidental access to low, invalid addresses.

What context is saved during a thread switch?

Register context (general registers, instruction pointer, stack pointer, etc.).

Stack pointer and stack frames.

Thread‑specific metadata such as state and scheduling priority.

Coroutine vs thread differences

Scheduling: OS schedules threads; coroutines are scheduled by the program or a coroutine scheduler.

Concurrency: threads run in parallel on multiple cores; coroutines run cooperatively within a single thread.

Memory overhead: threads require kernel stacks and control blocks; coroutines need only a small context.

Synchronization: threads use locks/condition variables; coroutines often use lightweight channels.

Network

TCP three‑way handshake

Client sends SYN with initial sequence number.

Server replies with SYN‑ACK, acknowledging the client’s sequence and providing its own.

Client sends ACK, completing the handshake; both sides enter ESTABLISHED state.

TCP congestion‑control process

The four phases are Slow Start, Congestion Avoidance, Congestion Occurrence, and Fast Recovery.

1. Slow Start

cwnd starts at 1 MSS and doubles each RTT until it reaches the slow‑start threshold (ssthresh).

2. Congestion Avoidance

After cwnd ≥ ssthresh, cwnd grows linearly (cwnd += 1/cwnd per ACK).

3. Congestion Occurrence

On timeout: ssthresh = cwnd/2, cwnd = 1 (restart Slow Start). On three duplicate ACKs (fast retransmit): cwnd = cwnd/2, ssthresh = cwnd, then enter Fast Recovery.

4. Fast Recovery

cwnd = ssthresh + 3, retransmit the lost segment, increase cwnd by 1 for each additional duplicate ACK, and on new ACK set cwnd = ssthresh.
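The first two phases above can be traced per RTT with a toy simulation (cwnd in MSS units; real stacks such as CUBIC behave differently, this only illustrates the exponential-then-linear shape of the curve):

```java
import java.util.ArrayList;
import java.util.List;

// Per-RTT cwnd trace through slow start and congestion avoidance.
public class CongestionDemo {
    static List<Integer> trace(int ssthresh, int rtts) {
        List<Integer> cwndPerRtt = new ArrayList<>();
        int cwnd = 1;                        // slow start begins at 1 MSS
        for (int i = 0; i < rtts; i++) {
            cwndPerRtt.add(cwnd);
            if (cwnd < ssthresh) cwnd *= 2;  // slow start: double each RTT
            else cwnd += 1;                  // congestion avoidance: +1 MSS per RTT
        }
        return cwndPerRtt;
    }

    public static void main(String[] args) {
        // ssthresh = 8: exponential growth up to 8, then linear
        System.out.println(trace(8, 6)); // [1, 2, 4, 8, 9, 10]
    }
}
```

On a timeout the trace would restart from cwnd = 1 with ssthresh halved, as described above.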

HTTP vs HTTPS

HTTPS adds SSL/TLS encryption between TCP and HTTP.

HTTPS requires an extra handshake after TCP’s three‑way handshake.

Default ports: 80 for HTTP, 443 for HTTPS.

HTTPS uses certificates issued by a CA to verify server identity.

HTTPS encryption process (TLS handshake)

ClientHello – client sends supported TLS version, random ClientRandom, and cipher suites.

ServerHello – server selects version, sends ServerRandom, chosen cipher suite, and its certificate.

Client verifies the certificate, extracts the server’s public key, encrypts a pre‑master secret with it, and sends it along with a handshake‑finished message.

Server decrypts the pre‑master secret, both sides derive the session key, and exchange Finished messages.

Redis

Common Redis data structures

String, Hash, List, Set, Zset, plus newer types: BitMap, HyperLogLog, GEO, Stream.

Zset underlying data structure

Small Zsets (≤128 elements and each element ≤64 bytes) use a compressed list (ziplist); larger ones use a skiplist plus a hash table. In Redis 7.0 the compressed list was replaced by the listpack structure.

MySQL

Characteristics of B+‑tree

B+‑tree stores actual records only in leaf nodes; internal nodes contain only keys, and the leaves are linked into an ordered list. This makes the tree shallower, reduces I/O, speeds up range queries, and simplifies insert/delete operations.

How to avoid the up‑to‑three I/O operations of a lookup?

A lookup in a three‑level B+‑tree can cost one disk read per level. Because the non‑leaf levels are small relative to the data, they can be cached in memory (as InnoDB's buffer pool does), so most lookups need at most one disk read, for the leaf page.

Intelligence Puzzle

To identify a single poisonous bottle among 1000 using a 24‑hour test, 10 mice are sufficient because 10 bits (2¹⁰ = 1024) can encode all bottles; each mouse drinks from bottles whose binary representation has a 1 in its position.
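The encoding can be verified by simulation: mouse k sips from every bottle whose (0‑based) index has bit k set, so the pattern of dead mice, read as a binary number, is exactly the poisoned bottle's index.

```java
// Simulation of the 1000-bottle / 10-mice puzzle.
public class PoisonDemo {
    static int findPoison(int poisoned, int bottles) {
        int deadMask = 0;
        for (int mouse = 0; mouse < 10; mouse++) {
            // mouse drinks a sip from every bottle whose index has bit `mouse` set
            for (int b = 0; b < bottles; b++) {
                boolean drinks = ((b >> mouse) & 1) == 1;
                if (drinks && b == poisoned) deadMask |= 1 << mouse; // this mouse dies
            }
        }
        return deadMask; // the dead-mice bit pattern equals the bottle index
    }

    public static void main(String[] args) {
        System.out.println(findPoison(731, 1000)); // 731
    }
}
```

Nine mice are not enough, since 2⁹ = 512 < 1000 distinct outcomes.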

Algorithms

Binary search (2 problems)

Shuffle algorithm

Maximum subarray sum
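For reference, hedged sketches of the three listed problems: iterative binary search, the Fisher–Yates shuffle, and Kadane's algorithm for maximum subarray sum (one standard approach each, not necessarily what the interviewer expected):

```java
import java.util.Random;

public class AlgoDemo {
    // Binary search: index of target in a sorted array, or -1 if absent.
    static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;          // avoids int overflow
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1; else hi = mid - 1;
        }
        return -1;
    }

    // Fisher–Yates: swap each position with a uniformly random index <= it.
    static void shuffle(int[] a, Random rnd) {
        for (int i = a.length - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1);
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
    }

    // Kadane: best sum ending at i is max(a[i], bestEndingHere + a[i]).
    static int maxSubArray(int[] a) {
        int best = a[0], cur = a[0];
        for (int i = 1; i < a.length; i++) {
            cur = Math.max(a[i], cur + a[i]);
            best = Math.max(best, cur);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(binarySearch(new int[]{1, 3, 5, 7}, 5));                // 2
        System.out.println(maxSubArray(new int[]{-2, 1, -3, 4, -1, 2, 1, -5, 4})); // 6
    }
}
```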

Tags: Java · Database · Operating System · Algorithms · Networking · Backend Interview
Written by

IT Services Circle

Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.
