
Embedded Systems Interview Guide: Linux Thread Scheduling, STM32 Configuration, Synchronization, and Debugging

This article provides a comprehensive interview guide for embedded system positions, covering Linux thread scheduling, single‑core CPU execution, STM32 chip specifications, SPI communication, DMA, synchronization primitives, priority inversion solutions, debugging techniques, and memory layout fundamentals.


1. Self‑introduction (background, education, and embedded project experience)

Introduce your personal background, academic background, and emphasize any coursework or projects related to embedded systems.

2. How are threads scheduled in Linux?

The kernel selects the next thread to run based on thread priority and scheduling policy.

The kernel maintains a run‑queue containing all threads in the runnable state.

When a CPU core becomes idle, the kernel picks the highest‑priority thread from the run‑queue and assigns it to that core.

The selected thread enters the running state and executes until an event (time‑slice expiration, I/O wait, sleep, etc.) moves it to a blocked or sleeping state.

If a thread becomes blocked or sleeps, the kernel removes it from the run‑queue and selects another runnable thread.

When the blocking condition is satisfied, the kernel wakes the thread and places it back into the run‑queue for scheduling.

3. Execution order of multiple threads on a single‑core CPU

On a single‑core CPU only one thread executes at a time; the scheduler creates the illusion of parallelism by giving each runnable thread a time slice (typically a few milliseconds) and switching between them. Under a round‑robin policy such as SCHED_RR the quantum is fixed; under Linux's default policy (SCHED_OTHER, handled by CFS) slice lengths are adjusted dynamically based on load and thread weight.

This is pre‑emptive scheduling: the kernel can interrupt a running thread at any time to give CPU time to another waiting thread. Interruptions occur when the time slice expires, the thread blocks or sleeps, or a higher‑priority thread becomes ready.

4. Determining which thread runs first among many

First‑Come‑First‑Served (FCFS)

Round‑Robin (RR)

Priority Scheduling (static or dynamic)

Pre‑emptive Scheduling

Shortest Job Next (SJN)

5. Common STM32 chip configurations

STM32F103 series (e.g., STM32F103C8T6)

CPU frequency: typically 72 MHz

Flash: 64 KB or 128 KB (the C8T6 has 64 KB)

RAM: 20 KB

STM32F407 series (e.g., STM32F407VGT6)

CPU frequency: typically 168 MHz

Flash: 512 KB or 1 MB

RAM: 192 KB (128 KB SRAM plus 64 KB CCM)

STM32L432 series (e.g., STM32L432KC)

CPU frequency: typically 80 MHz

Flash: 128 KB or 256 KB (the KC variant has 256 KB)

RAM: 64 KB

Refer to the specific datasheet for exact values.

6. Types of development performed on STM32

Embedded system development (smart home control, industrial automation, robotics, etc.)

IoT applications (connectivity, cloud integration, sensor data collection)

Peripheral control (LCD, touch screens, buttons, LEDs, etc.)

Data acquisition and processing (sensor interfacing, storage, transmission)

Smart vehicle / drone control (motor drivers, sensor fusion, navigation algorithms)

7. Drivers you have written

Display drivers (resolution, refresh rate, color handling)

Network interface drivers (Ethernet, etc.)

Audio drivers (playback and recording)

USB device drivers (printers, scanners, cameras, etc.)

Storage drivers (HDD, SSD, flash memory)

8. SPI communication basics

SPI uses four lines:

SCLK (Serial Clock) – generated by the master.

MOSI (Master Out Slave In) – data from master to slave.

MISO (Master In Slave Out) – data from slave to master.

SS/CS (Slave Select / Chip Select) – active‑low line that selects the target slave.

Typical SPI clock rates range from a few hundred kHz to several tens of MHz, depending on hardware and application requirements.

9. What is DMA?

Direct Memory Access (DMA) transfers data between peripherals and memory without CPU intervention, freeing the CPU and improving performance. A DMA controller has its own address, count, and status registers and follows a typical flow: configure controller, request transfer, perform transfer, and signal completion via interrupt.

10. The four SPI modes

SPI modes are defined by CPOL and CPHA:

Mode 0: CPOL = 0, CPHA = 0

Mode 1: CPOL = 0, CPHA = 1

Mode 2: CPOL = 1, CPHA = 0

Mode 3: CPOL = 1, CPHA = 1

Choose the mode that matches the peripheral’s requirements.

11. Common development challenges

Error diagnosis (log analysis, debugger usage)

Performance optimization (profiling, algorithm improvement)

Concurrency issues (deadlocks, race conditions)

Third‑party library integration (version compatibility, configuration)

Cross‑platform compatibility (testing on different OS/hardware)

12. Large‑scale software development considerations

Involves requirement analysis, system design, module decomposition, implementation, testing, and deployment, with attention to code reuse, scalability, and maintainability. Design patterns and layered architecture help manage complexity.

13. Factors in middle‑layer design

Functional separation (clear responsibilities and interfaces)

Extensibility (easy addition of new features)

Loose coupling (minimal dependencies)

Security (authentication, authorization, data protection)

Performance (caching, async processing, concurrency)

Logging and monitoring

Testability (unit, integration, end‑to‑end tests)

Documentation (interface specs, workflow description)

14. Locks and synchronization in inter‑process communication

Common mechanisms include mutexes, semaphores, condition variables, barriers, and read‑write locks to ensure data consistency and safe concurrent access.

15. Process state when a lock cannot be acquired

Processes may block (sleep) waiting for the lock, or, if using a non‑blocking call, return an error immediately and continue execution.
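The non‑blocking case corresponds to `pthread_mutex_trylock`, which returns `EBUSY` immediately instead of sleeping when the mutex is already held (the `try_twice` wrapper is illustrative):

```c
#include <pthread.h>
#include <errno.h>

/* Returns 1 if trylock behaved as expected: success on a free mutex,
   EBUSY (without blocking) on a mutex that is already locked. */
int try_twice(void) {
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    int first  = pthread_mutex_trylock(&m);  /* mutex free: returns 0 */
    int second = pthread_mutex_trylock(&m);  /* already held: returns EBUSY */
    pthread_mutex_unlock(&m);
    return first == 0 && second == EBUSY;
}
```

A caller that gets `EBUSY` can do other useful work and retry later, which is often preferable to blocking in latency‑sensitive paths.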

16. Priority inversion scenario

If a low‑priority process holds a lock and a high‑priority process cannot acquire it, the high‑priority process is typically blocked until the lock is released.

17. Mid‑priority task pre‑empting the CPU

In pre‑emptive scheduling, a ready task with higher priority than the currently running task can pre‑empt the CPU.

18. Solving priority inversion

Techniques include priority inheritance, priority ceiling, and careful use of mutexes/semaphores to avoid long waits for high‑priority tasks.

19. Raising the priority of task A

Change the scheduling policy (e.g., to a real‑time policy).

Adjust the task’s priority attribute in the OS.

Cooperate with other tasks to voluntarily yield resources.

20. Determining how high to raise task A’s priority

The exact priority value depends on the operating system’s priority range and the specific real‑time requirements of the application.

21. User‑mode vs. kernel‑mode development on Linux

Linux consists of a kernel (kernel‑mode) that manages hardware and provides services, and user‑space programs (user‑mode) that run on top of those services, crossing into the kernel only through system calls. Kernel‑mode development (drivers, modules) runs with full hardware access and a bug can crash the whole system; user‑mode development is isolated per process and a crash affects only that process.

22. Debugging user‑mode crashes (segmentation faults, etc.)

Use a debugger such as GDB to step through code and inspect the stack.

Insert logging statements to trace variable values and execution flow.

Employ memory analysis tools like Valgrind.

Run static analysis tools (e.g., Clang Static Analyzer).

Perform regression testing with reproducible inputs.

Seek help from community forums.

23. Debugging runtime errors

Log output at critical points.

Use assertions to validate assumptions.

Apply binary search (divide‑and‑conquer) debugging.

Utilize debuggers (GDB, LLDB) with breakpoints and backtraces.

Run memory checkers (Valgrind).

Analyze core dump files.

Use visual debugging tools (IDE integrated debuggers).

Handle exceptions and report diagnostic information.

24. Program memory layout sections

Code (Text) segment – read‑only executable instructions.

Data segment – initialized global/static variables.

BSS segment – zero‑initialized global/static variables.

Heap – dynamically allocated memory (malloc, new).

Stack – function call frames and local variables.

Other special sections (shared libraries, shared memory, etc.).

25. Differences between heap and stack

Allocation: heap memory is obtained with explicit calls such as malloc()/new and released with free()/delete; stack allocation and release happen automatically on function entry and exit.

Management: heap is manually managed; stack is managed by the compiler/runtime.

Size: heap is typically large; stack is limited.

Allocation method: heap blocks are sized and allocated at runtime; stack frame sizes are fixed at compile time, and frames are pushed and popped automatically at runtime.

Access speed: stack access is faster due to locality and contiguous layout.
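The lifetime difference is the one that bites in practice. A heap allocation outlives the function that created it, so returning a heap pointer is valid (the caller must free it), while returning the address of a stack local is undefined behavior (the `make_counter` helper is illustrative):

```c
#include <stdlib.h>

/* Valid: the heap block survives the return; ownership passes
   to the caller, who must free() it. Returning &local_variable
   instead would be undefined behavior, because the stack frame
   is destroyed on return. */
int *make_counter(void) {
    int *p = malloc(sizeof *p);
    if (p)
        *p = 0;
    return p;
}
```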

Written by Deepin Linux

Research areas: Windows & Linux platforms, C/C++ backend development, embedded systems and Linux kernel, etc.
