Why Rust Still Frustrates Me: Compile Times, Borrow Checker & Async Pain
After five years of using Rust for backend services, CLI tools, and embedded systems, the author outlines persistent frustrations such as long compile times in large workspaces, cumbersome borrow‑checker constraints, awkward async ergonomics, excessive boilerplate from the orphan rule, and the impact of these issues on team productivity, while acknowledging Rust’s safety and performance benefits.
Background
The author has been programming in Rust for more than five years, delivering production‑grade backend services, command‑line tools, and embedded systems. Rust is chosen for its predictable performance and memory‑safety guarantees, but several recurring pain points have emerged over time.
Compile‑time Overhead
In a workspace of roughly 40 kLOC, even a single line change can trigger a full recompilation that takes 25–50 seconds on a high‑end machine. Incremental compilation and workspace tricks in Cargo help, but any modification to generic‑heavy modules or proc‑macro crates restarts the timer. Engineers often find their thought process interrupted while waiting for cargo check, and the feedback loop remains slower than in Go, Zig, or modern C++.
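The mitigations mentioned above usually start in the workspace's profile settings. A sketch of two common tweaks (starting points only; the actual wins depend heavily on the crate graph, so measure before adopting):

```toml
# Hypothetical workspace-level Cargo.toml profile tweaks.
[profile.dev]
debug = 0                 # skip full debug info to speed up linking

[profile.dev.package."*"]
opt-level = 2             # optimize dependencies once; they stay cached
```

Splitting generic‑heavy or proc‑macro code into leaf crates helps for the same reason: it narrows what a one‑line change invalidates.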
Borrow‑Checker Frustrations
Although the author fully understands Rust’s borrowing rules, certain patterns force painful refactoring. Self‑referential structs, cache views whose lifetimes outlive their data sources, and state machines that need to move ownership all trigger borrow‑checker errors. In async code this pain is amplified: everything must be 'static + Send, or the programmer must manually manage Arc<...> or pinned futures. The author admits to reaching for Rc<...> far more often than desired, and each use feels like a quiet surrender.
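The "cache view outliving its source" case usually ends in exactly that surrender: shared ownership instead of a borrow. A minimal sketch with invented names (not code from the author's projects), where a `&str` field would tie the view's lifetime to the source, but `Rc` sidesteps it:

```rust
use std::rc::Rc;

// A cache entry that shares ownership of its source data instead of
// borrowing it -- a `data: &'a str` field would force the view to die
// with its source, which the borrow checker rightly enforces.
struct CacheView {
    data: Rc<String>,
}

fn main() {
    let view = {
        // The source binding goes out of scope at the end of this block...
        let source = Rc::new(String::from("payload"));
        CacheView { data: Rc::clone(&source) }
        // ...but the view keeps the allocation alive via shared ownership.
    };
    println!("{}", view.data); // prints "payload"
}
```

The cost is an extra allocation and refcount traffic, which is precisely why each `Rc` feels like a concession rather than a design choice.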
Async Ergonomics
Async/await works well, but its “contagion” never disappears. An async function forces the surrounding code to become async, and calling synchronous code from async contexts (or vice versa) requires clunky adapters. Sharing data across tasks demands tokio::sync types or Arc, and operations such as cancellation, select!, and stream handling still involve far more boilerplate than in other languages. Moreover, the split between runtimes (e.g., Tokio vs. others) adds friction when choosing a crate.
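The sync-to-async seam is the clearest example of those clunky adapters. In practice one reaches for a runtime's executor (e.g., Tokio's Runtime::block_on), but the shim can be sketched with only the standard library; `block_on` and `fetch_answer` below are invented for illustration and only handle futures that never actually suspend:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Minimal executor shim: the adapter sync code needs just to *call*
// an async function -- async's "contagion" in its smallest form.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    // A no-op waker suffices because we busy-poll instead of sleeping.
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable =
        RawWakerVTable::new(|_| raw(), |_| {}, |_| {}, |_| {});
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` is a local we never move after pinning it here.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => std::thread::yield_now(),
        }
    }
}

async fn fetch_answer() -> u32 {
    42
}

fn main() {
    // A sync `main` cannot `.await`, so every call crosses the shim.
    let answer = block_on(fetch_answer());
    println!("{answer}");
}
```

That a correct bridge needs raw waker vtables and unsafe pinning is itself a fair summary of the ergonomics complaint.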
Boilerplate and the Orphan Rule
Rust’s standard library is intentionally minimal, so developers must pull in external crates for dates, regex, randomness, HTTP clients, etc. The orphan rule prevents implementing external traits for external types, forcing the use of newtype wrappers or Deref tricks. Consequently, code becomes longer with explicit .clone(), .into(), turbofish syntax, and verbose trait‑bound error messages.
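A minimal sketch of the newtype workaround (names invented for illustration): both fmt::Display and Vec<u8> are foreign to the current crate, so the orphan rule forbids implementing the trait on the type directly, but a local wrapper makes the impl legal.

```rust
use std::fmt;

// `impl fmt::Display for Vec<u8>` is rejected by the orphan rule:
// both the trait and the type live in other crates (std). Wrapping
// the type in a local newtype restores coherence.
struct Hex(Vec<u8>);

impl fmt::Display for Hex {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for byte in &self.0 {
            write!(f, "{byte:02x}")?;
        }
        Ok(())
    }
}

fn main() {
    let digest = Hex(vec![0xde, 0xad, 0xbe, 0xef]);
    println!("{digest}"); // prints "deadbeef"
}
```

The price is the wrapper itself: every call site now unwraps, re-wraps, or derefs, which is where much of the .clone()/.into() ceremony accumulates.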
Team Velocity and Onboarding Cost
Even experienced C++, Go, or Python engineers can spend days wrestling with lifetimes or trait bounds. New hires may get stuck for a week on a single borrow error that looks trivial once solved. In teams with mixed experience, these issues significantly slow feature delivery: the compiler acts as a strict and unforgiving teacher, and its lessons cost real development time.
Conclusion
While Rust’s safety and performance deliver tangible benefits—zero data races, predictable speed, and fearless refactoring—the trade‑offs are longer compile times, ownership ceremony, and occasional moments of “why is this so hard?”. Mid‑ to senior‑level engineers evaluating Rust for new services should weigh these factors holistically. For most cases, the author still prefers Rust, having learned to adapt to its rough edges rather than ignore them.