Why Dropping C for Rust Isn’t a Simple Security Fix
This article weighs the economic, technical, and practical factors behind replacing C/C++ with memory-safe languages like Rust, and argues that the decision is far more complex than a straightforward security upgrade.
Recently I saw an article that downplayed memory safety and argued that no changes were needed, only to see security experts respond that abandoning C and C++ is essential for safety and responsibility.
This piece is my detailed analysis of that topic, aiming to cover every angle so industry peers can understand my viewpoint.
TL;DR Version
Security problems are far more serious than many assume, and many teams should avoid C/C++ for new projects, though not only for safety reasons.
The cost and risk of removing existing C code from applications are higher than most imagine; some critical software replacements take a decade or more to become mainstream, and the overall benefit of new software is not obvious.
Security is a hidden, complex field, so while “Rust is safer than C” may be true, the reality is not that simple.
Choosing a programming language looks simple, but the economics are very complex. Security is not the only non‑functional concern, and any system will contain some unsafe code as long as the underlying architecture is unsafe; trying to quickly eliminate C code brings many negative side effects.
System languages are over‑used; the C‑vs‑Rust dichotomy is a false choice because languages like Go often provide a better economic all‑round solution. Go offers sufficient performance for most use‑cases, can be safe, and accesses low‑level system APIs well.
Some security staff are already furious
I once saw a security engineer arguing with the business side, so I asked: “If you think security is paramount, why do you still use computers?”
People willingly accept risk daily—traveling can expose you to viruses, driving can cause accidents—but we tend to over‑ or underestimate our risk levels.
Historically, the security industry may have thought people underestimate risk, but today many industries underestimate it: network connections can be tampered with, code can be easily compromised, patches are missing, and isolation is weak.
Thanks to hard work in security, other tech fields now acknowledge the need for security—from hardware design to network protocols to language design.
If we remember this, the industry can progress faster and gain credibility.
How serious is memory safety?
Memory safety is often considered the most severe vulnerability class because exploits can gain highest system privileges and may be remote and unauthenticated.
However, the claim that memory‑unsafe bugs are abundant and easy for skilled attackers to find is wrong.
That was true in the early 2000s; today the impact of unsafe code is still large, but not so large that it forces a switch to a safe language against the economic pressure to stay.
Risk landscape has changed
I acknowledge that languages other than C/C++ are inherently safer, but I question exactly how much safer they are when existing security measures are considered.
Hardware and OS improvements now help block exploits without sacrificing much performance.
C++ uses its standard library to steer users away from unsafe APIs, while C remains more conservative.
Full‑disclosure movements have increased scrutiny of C components, raising programmer awareness.
Many universities no longer teach C++ as a first language, having moved to Java and then Python.
New system languages like Rust, Zig, Nim, etc., emphasize memory safety.
Cloud migration and modern stacks add abstraction but also increase attack surface; however, they also improve isolation, reducing impact.
Considering all this, many memory bugs in C/C++ are reported as exploitable, yet in practice many are not. Exploit development has become rarer and more costly.
Economic incentives skew risk perception: valuable bugs are rare, and many CVEs in C/C++ are not truly exploitable.
Sometimes C is used for valid reasons, especially in embedded systems where it remains the most practical choice.
Comparing CVE counts across languages can be misleading; for example, the Linux kernel now assigns CVEs to every bug, even non‑exploitable memory issues.
Exploitability has decreased
Finding good bugs is harder, and researchers are more diligent, so actual risk is lower, especially with good compensating controls.
While many C programs still have problems, proper design and paid code reviews raise the cost of finding the next bug.
Classic stack‑based buffer overflow examples illustrate how simple code can be vulnerable:
```c
#include <limits.h>
#include <string.h>

static const char *base_path = "/tmp/";  /* assumed; not shown in the original */

void open_tmp_file(char *filename) {
    char full_path[PATH_MAX] = {0};
    strcpy(full_path, base_path);   /* no length check */
    strcat(full_path, filename);    /* can write past the end of full_path */
    // Do something with the file.
}
```

Because C strings do not carry their lengths, strcpy and strcat will overflow full_path whenever len(base_path) + len(filename) exceeds PATH_MAX - 1.
Stack frames mix runtime data with user data, making traditional stack overflows easy.
Attackers can overwrite return addresses with payloads (shellcode) to execute arbitrary code.
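A bounded rewrite of the path-building logic above shows how cheaply this particular overflow can be closed. This is a sketch, not the original author's code; the function name and the success/failure convention are ours. It uses snprintf, which never writes past the given size and reports how long the full result would have been, so truncation can be detected and rejected:

```c
#include <stdio.h>

/* Build base + name into dst (dstlen bytes).
   Returns 0 on success, -1 if the result would not fit. */
int build_tmp_path(char *dst, size_t dstlen,
                   const char *base, const char *name) {
    int n = snprintf(dst, dstlen, "%s%s", base, name);
    if (n < 0 || (size_t)n >= dstlen)
        return -1;  /* would have truncated: refuse instead of overflowing */
    return 0;
}
```

Callers must still handle the failure case, but the failure is now an error return rather than a smashed stack.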
Why not always perform bounds checks?
Doing exhaustive checks would significantly impact performance and is infeasible for many domains.
Most software relies heavily on C/C++ for low‑level system code; operating systems and runtime libraries are written in these languages.
Rewriting everything in Rust is a long, arduous journey.
Rust can approach C speed because the compiler can prove many checks unnecessary.
Properly designed C APIs can avoid memory errors by tracking lengths and performing checks.
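One common shape for such an API is a buffer that carries its own capacity, so every write can be checked before it happens. The sketch below is illustrative only; the type and function names (bounded_buf, bb_append) are invented for this example:

```c
#include <stddef.h>
#include <string.h>

/* A string buffer that tracks its own capacity and length. */
typedef struct {
    char  *data;
    size_t cap;   /* total capacity, including the terminator */
    size_t len;   /* current string length */
} bounded_buf;

/* Append src to the buffer; return 0 on success, -1 if it would overflow. */
int bb_append(bounded_buf *b, const char *src) {
    size_t n = strlen(src);
    if (b->len + n + 1 > b->cap)
        return -1;              /* reject instead of writing past the end */
    memcpy(b->data + b->len, src, n + 1);
    b->len += n;
    return 0;
}
```

The check costs one comparison per append; the expensive part is the discipline of routing every write through the API rather than touching the raw array.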
Economic factors usually dominate software decisions; performance, cost, and developer experience outweigh pure safety concerns.
History of mitigation for out‑of‑bounds errors
We must ask how much risk we are willing to accept today.
Mitigations like StackGuard (1998) place a randomized canary value on the stack before the return address and abort the program if it has been modified, making many classic overflows ineffective.
Attackers adapt; for example, overflows in dynamically allocated (heap) memory sidestep stack canaries entirely.
Address Space Layout Randomization (ASLR) randomizes data locations at program start, drastically reducing exploit success probability.
Two important questions
Should systems keep program data away from internal state? Can we prevent code execution in heap/stack?
Stacks are kept per-thread because that is simple and fast, and any code in a process can address all of that process's memory, which makes cleanly separating program data from internal state difficult.
Randomization and isolation help, but some environments still allow execution of injected code.
Return‑Oriented Programming (ROP) can still be used when other mitigations fail.
Control-Flow Integrity (CFI) schemes, including Intel's hardware support, attempt to block ROP by verifying that indirect calls and returns go to legitimate targets.
CFI does not stop attacks on languages like Python that run on a VM.
Overall, modern mitigations make many memory bugs hard to exploit, though not impossible.
Other memory errors
C/C++ suffer from use‑after‑free, double‑free, integer overflow, and manual memory management errors.
Garbage‑collected languages reduce many of these issues, though their own runtimes can have serious bugs.
Modern mitigation in C++
C++ standard library uses RAII to automatically free resources, reference‑counted wrappers to avoid raw pointers, and provides bounds‑checked containers.
C++ also has static analysis tools and optional garbage collection (Boehm).
However, C lacks many of these conveniences and remains more conservative.
Why are my views biased?
Statistics showing that most CVEs stem from memory issues are accurate, but they do not reflect actual risk; many of those bugs are not exploitable.
Rust’s CVE count is lower partly because less production Rust code exists.
Economic incentives drive researchers to focus on high‑value bugs, skewing perception.
Why don’t people switch to Rust?
Reasons include satisfaction with existing garbage collection, familiarity with current ecosystems, steep learning curve, talent shortage, perceived code complexity, long build times, and heavy external dependencies.
Rust’s ecosystem encourages many small dependencies, increasing supply‑chain attack surface.
Languages with richer standard libraries (Go, Python) reduce dependency risk.
Rust may have larger supply‑chain risk than C
C programs have few external dependencies, making supply‑chain attacks harder.
Rust projects often pull in many third‑party crates, complicating audit and increasing risk.
Standard libraries that include more functionality can reduce external dependencies, but most languages move away from that trend.
Recommendations
Consider overall economic impact when choosing technology; don’t default to system languages without analysis.
If you pick Go, Swift, Java, or C#, you may get better cost‑benefit ratios.
Always factor security into decisions, but recognize that choosing Rust does not eliminate cost.
Avoid unnecessary dependencies; fewer dependencies mean shorter builds, less testing, and lower risk.
When using C, document and justify the decision, and actively mitigate memory‑safety issues.
Security teams should listen to non‑security perspectives and avoid over‑simplifying risk.
Feedback
I welcome discussion and feedback on this topic, though I may not respond quickly.
Translator: Wang Qiang
Reference: https://medium.com/@john_25313/c-isnt-a-hangover-rust-isn-t-a-hangover-cure-580c9b35b5ce
