Analysis of Common C/C++ Memory Issues: Leaks and Performance Degradation
This article examines typical C/C++ memory problems, classifying them into memory leaks and fragmentation-driven performance degradation. It explains root causes such as lost pointers, improper releases, and unbounded allocations, and provides diagnostic examples and mitigation strategies based on memory pools such as tcmalloc.
Based on the earlier work "C/C++ Program Core Dump Analysis", this piece categorizes program memory problems into two main phenomena: memory leaks and performance degradation, aiming to help readers understand their types and fundamental concepts.
Memory leaks manifest as memory usage that grows continuously without ever being released. They arise either from flawed program design (e.g., lost pointers, unreleased allocations, unbounded allocation loops, data structures that are never freed) or from excessive memory fragmentation caused by poor allocation patterns.
Examples include pointer loss when a framework memset resets a structure containing pointers, failure to release memory in exception paths, non‑virtual destructors preventing base‑class cleanup, and mixing custom memory pools with regular allocations that leave orphaned blocks.
Excessive fragmentation, illustrated with tcmalloc’s free‑list and large‑list mechanisms, makes memory reclamation difficult and can lead to hidden leaks when large spans remain in the large list.
Performance issues stem from the same fragmentation: frequent malloc/free cycles trigger OS page faults, inflating CPU time. The article reviews glibc's allocation strategies (brk/sbrk for small requests, mmap for large ones) and explains how minor and major page faults affect performance.
Case studies show that allocating 2 MiB per thread and releasing via mmap can generate thousands of page faults per second, dramatically raising CPU usage, while using a memory pool (tcmalloc, jemalloc) mitigates this overhead.
Allocation failures can also occur when the number of memory mappings exceeds the kernel's max_map_count, as demonstrated by a 74 GiB process hitting the limit due to massive fragmentation.
The article concludes that memory issues are harder to diagnose than core dumps, requiring OS‑level knowledge and tooling; employing mature memory pools and careful design remains the primary recommendation.
Baidu Intelligent Testing