Key Findings and Recommendations from the 2021 DORA DevOps Report (Chapters 1‑3)
The 2021 DORA DevOps Report, based on a seven‑year study of more than 32,000 professionals, shows how elite software delivery and technical‑operations practices—reliability goals, secure supply‑chain integration, high‑quality documentation, cloud adoption, and positive team culture—drive superior organizational performance. It also offers data‑driven guidance for improvement.
Excerpt from the DORA DevOps Report 2021, Chapters 1‑3
Chapter 1 Survey Overview
The Google Cloud DORA team released a DevOps acceleration state report representing seven years of research and data from more than 32,000 professionals worldwide.
Our research examined the capabilities and practices that drive software delivery, technical operations, and organizational performance. Using rigorous statistical techniques, we identified data‑driven insights into the most effective and efficient methods for developing and delivering technology.
We found that excellent software delivery and technical‑operations performance propel organizational performance in technology transformation. To enable teams to benchmark themselves against the industry, we used cluster analysis to form meaningful performance categories (low, medium, high, elite).
When teams understand their relative performance, they can use our predictive analytics to target practices and capabilities, improve key outcomes, and ultimately improve their relative position.
1. This Year’s Focus
We emphasized meeting reliability objectives, integrating security across the entire software supply chain, creating high‑quality internal documentation, and fully leveraging cloud potential.
We also explored whether a positive team culture can mitigate the impact of remote work caused by the COVID‑19 pandemic.
To make meaningful improvements, teams must adopt a continuous‑improvement mindset, use baselines to measure their current state, identify constraints based on the capabilities mentioned in the survey, and act on lessons learned.
2. Key Findings
1. The best‑performing companies are growing and continuing to raise the benchmark. Elite‑performance teams now represent 26% of all teams, and their lead time for changes has shortened.
2. SRE and DevOps are complementary philosophies. Teams that adopt modern SRE practices report higher technical‑operations performance, and those that prioritize delivery and operational excellence achieve the highest organizational performance.
3. More teams are leveraging cloud technologies and seeing large benefits. Teams that use all five cloud capabilities experience higher software delivery‑and‑operations (SDO) performance and organizational performance; multi‑cloud adoption is increasing.
4. Secure software supply chains are both critical and performance‑driving. Integrating security throughout the supply chain enables rapid, reliable, and safe software delivery.
5. Good documentation underpins successful DevOps capability. High‑quality internal documentation improves the ability to implement technical practices and overall performance.
6. Positive team culture can alleviate burnout in challenging environments. Inclusive, vibrant teams reported less burnout during the pandemic.
Vibrant team culture refers to highly collaborative teams that break down silos, treat failures as learning opportunities, and share the risk of decisions.
Chapter 2 How We Compare
We surveyed how teams develop, deliver, and operate software and grouped respondents into four performance clusters: elite, high, medium, and low. Comparing a team’s performance to each cluster reveals its relative standing.
1. Software Delivery and Technical‑Operations Performance
Organizations must deliver and operate software quickly and reliably to meet changing industry demands. Faster change lead time enables faster value delivery, experimentation, and feedback.
Over seven years of data collection, we have validated four metrics that measure software delivery performance and added a fifth metric in 2018 to capture operational capability.
We call these five metrics "Software Delivery and Technical‑Operations (SDO) performance". They focus on system‑level outcomes, avoiding common pitfalls of component‑level metrics.
1. The Four Delivery Performance Metrics
Throughput is measured by lead time for changes (the time from code commit to production) and deployment frequency. Stability is measured by mean time to restore service and change failure rate.
Cluster analysis of the four delivery metrics reveals four distinct performance profiles (elite, high, medium, low) with statistically significant differences in throughput and stability.
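The report's clustering is done with rigorous statistical methods on survey data; as an illustration of the mechanics only, here is a toy k‑means over a handful of hypothetical teams (all numbers invented, chosen to resemble the four tiers):

```python
# Toy sketch of clustering teams by the four delivery metrics.
# All team data below is hypothetical, not from the report.
import numpy as np

# Columns: lead time (hours), deploys/year, restore time (hours),
# change failure rate (%). Rows: 8 made-up teams, two per tier.
teams = np.array([
    [0.5, 1460, 0.5, 7],   [1.0, 1200, 1.0, 8],    # elite-like
    [24, 200, 12, 12],     [48, 150, 24, 14],      # high-like
    [336, 20, 168, 18],    [400, 15, 200, 20],     # medium-like
    [4380, 1.5, 4380, 25], [5000, 1.0, 4000, 28],  # low-like
], dtype=float)

# Standardize each metric so no single scale dominates the distances.
z = (teams - teams.mean(axis=0)) / teams.std(axis=0)

def kmeans(x, k, init_rows, iters=20):
    """Plain k-means; centroids seeded from chosen rows for determinism."""
    centroids = x[init_rows].copy()
    for _ in range(iters):
        # Assign each team to its nearest centroid.
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned teams.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = x[labels == j].mean(axis=0)
    return labels

labels = kmeans(z, k=4, init_rows=[0, 2, 4, 6])
print(labels)  # teams in the same tier end up sharing a label
```

With well‑separated groups like these, the four clusters recover the four tiers; real survey data is noisier, which is why the report relies on formal cluster analysis rather than fixed cutoffs.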
2. The Fifth Metric: From Availability to Reliability
The fifth metric captures technical‑operations performance, focusing on reliability—how well teams meet their reliability commitments.
Teams that prioritize reliability see better outcomes across all delivery performance categories.
3. Industry Is Accelerating Continuously
Each year the industry improves its ability to deliver software faster and more stably. Elite and high performers together now account for two‑thirds of respondents, and elite lead time has dropped from less than a day in 2019 to less than an hour in 2021.
4. Throughput
A) Deployment Frequency – Elite teams deploy multiple times per day, while low‑performing teams may deploy less than once every six months.
Normalized annual deployment counts range from 1,460 per year (4 per day) to 1.5 per year; elite teams deploy roughly 973 times more frequently than low‑performing teams.
B) Lead Time for Changes – Elite teams achieve lead times of under one hour, compared with over six months for low‑performing teams.
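The normalization behind the "roughly 973×" figure is plain arithmetic and can be checked directly:

```python
# Normalizing the throughput figures quoted above: elite teams deploy
# about 4 times per day, low performers about 1.5 times per year.
elite_per_year = 4 * 365          # 1,460 deployments per year
low_per_year = 1.5                # roughly one deploy every eight months
ratio = elite_per_year / low_per_year
print(elite_per_year, round(ratio))  # 1460 973
```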
5. Stability
A) Service Restoration Time – Elite teams restore services in under an hour, while low‑performing teams may take up to six months.
B) Change Failure Rate – Elite teams experience a 0‑15% failure rate (average ~7.5%), whereas low‑performing teams see 16‑30% (average ~23%).
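The quoted "averages" are simply the midpoints of the published ranges, and the elite thresholds above lend themselves to a rough self‑check. A minimal sketch (the function and its thresholds are a simplification of the figures quoted above, not the report's classification method):

```python
# The failure-rate "averages" above are range midpoints.
cfr_elite_avg = (0 + 15) / 2    # 7.5%
cfr_low_avg = (16 + 30) / 2     # 23.0%

def looks_elite(lead_time_hours, restore_hours, change_failure_pct):
    """Rough check against the elite thresholds quoted above:
    lead time and restore time under an hour, failure rate 0-15%."""
    return (lead_time_hours < 1 and restore_hours < 1
            and 0 <= change_failure_pct <= 15)

print(cfr_elite_avg, cfr_low_avg)      # 7.5 23.0
print(looks_elite(0.5, 0.75, 7))       # True
print(looks_elite(4380, 4380, 23))     # False
```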
Chapter 3 How to Improve
How can SDO and organizational performance be improved? This research provides evidence‑based guidance on capabilities that drive performance.
This year’s report examines the impact of cloud, SRE practices, security, technical practices, and culture.
1. Cloud
Consistent with the 2019 Accelerate State of DevOps report, more organizations are adopting multi‑cloud and hybrid‑cloud solutions. 56% of respondents now use public cloud (including multiple providers), five percentage points higher than in 2019. 21% use multiple public clouds, 34% use hybrid cloud, and 29% use private cloud.
1. Adoption
Leverage hybrid and multi‑cloud to accelerate business outcomes
Users of hybrid or multi‑cloud are 1.6× more likely to exceed organizational performance goals and 1.4× more likely to excel in SDO metrics.
Why use multi‑cloud?
26% adopt multiple providers to exploit each provider's unique advantages; 22% cite availability, and multi‑cloud users are 1.5× more likely to meet or exceed reliability goals.
2. Changes in the Baseline
A) How cloud infrastructure is used matters. Rather than merely using cloud services, the way teams implement cloud capabilities (as defined by NIST) drives performance. Elite teams are 3.5× more likely to exhibit all five NIST cloud characteristics.
Only 32% of respondents agree they meet all five NIST characteristics, three percentage points higher than in 2019; adoption of the individual NIST characteristics rose 14‑18 percentage points.
B) On‑demand self‑service – 73% of respondents use self‑service, up 16% from 2019.
C) Broad network access – 74% report using this capability, up 14%.
D) Resource pooling – 73% report using this capability, up 15%.
E) Rapid elasticity – 77% report using this capability, up 18%.
F) Measured service – 78% report using this capability, up 16%.
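The year‑over‑year deltas above imply the 2019 baselines; a quick percentage‑point check:

```python
# Back out the 2019 baselines implied by the 2021 adoption figures and
# their reported percentage-point increases, for each of the five
# NIST cloud characteristics listed above.
nist_2021 = {
    "on-demand self-service": (73, 16),
    "broad network access":   (74, 14),
    "resource pooling":       (73, 15),
    "rapid elasticity":       (77, 18),
    "measured service":       (78, 16),
}
baseline_2019 = {name: pct - delta for name, (pct, delta) in nist_2021.items()}
print(baseline_2019["on-demand self-service"])  # 57
print(baseline_2019["rapid elasticity"])        # 59
```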
Continuous Delivery 2.0
Tech and case studies on organizational management, team management, and engineering efficiency