
DORA Metrics: Helping Firms Find Dev Productivity Insight

Since DevOps presents a mixture of various industry best practices, toolkits, and, most importantly, cultures, the question arises: How do you measure their impact? Fortunately, there are several sets of CI/CD metrics that help IT and software development organizations have sufficient visibility and derive actionable insight.

One such set is called ‘DORA metrics,’ or DevOps Research and Assessment. Building on our rich GitLab consulting experience, we outline DORA’s foundational pillars, explore the parameters it needs as input, and provide tips on optimizing the delivery pipeline.

What Are DORA Metrics?

DORA metrics were designed by a dedicated Google Cloud team back in 2020, following six years of thorough research. The initial metrics included:

  • Deployment frequency and lead time for changes (both highlight team velocity).
  • Change failure rate and time to restore service (both highlight stability).

In 2021, the team added a fifth indicator, reliability, in response to the ever-evolving DevOps landscape.

In 2020, the world entered pandemic mode, which accelerated digital transformation across advanced economies by 6 pp. on average, as per the International Monetary Fund. As adoption grew, the metrics began to show their first results. In 2023, McKinsey, a Big Three management consultancy, introduced a development productivity assessment approach built on DORA and other DevOps metrics (SPACE). For 20 firms in tech, finance, and pharma, the gains were tangible:

  • 20% to 30% decrease in defect issues raised by the customers.
  • 20% enhancement in employee experience.
  • 60 pp. boost in customer satisfaction.

Today, we look at the DORA metrics through the lens of GitLab, a one-stop shop for all things DevSecOps. As of March 2024, the platform was recognized as one of the best in its category by a reputable software review site, G2.
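Before walking through each metric, it helps to know where the numbers come from. GitLab (on the Ultimate tier) exposes DORA metrics through its REST API at `GET /projects/:id/dora/metrics`. The sketch below only constructs the request URL; the instance address, project ID, and date are hypothetical placeholders.

```python
from urllib.parse import urlencode

def dora_metrics_url(base_url: str, project_id: int, metric: str, **params) -> str:
    """Build a URL for GitLab's DORA metrics endpoint.

    `metric` is one of: deployment_frequency, lead_time_for_changes,
    change_failure_rate, time_to_restore_service.
    """
    query = urlencode({"metric": metric, **params})
    return f"{base_url}/api/v4/projects/{project_id}/dora/metrics?{query}"

# Hypothetical instance and project; authentication (a PRIVATE-TOKEN
# header) would be added when actually issuing the request.
url = dora_metrics_url("https://gitlab.example.com", 42,
                       "deployment_frequency", start_date="2024-03-01")
```

The response is a list of per-day (or per-month) values, which you can aggregate however your reporting requires.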

Deployment Frequency

The deployment frequency metric shows how often your teams push code live. You can measure it per hour, day, week, month, or year, depending on how broad a picture you want to see. This performance monitoring metric is vital because it gauges how well-positioned you are to meet new customer demands and seize new market opportunities. The higher the frequency, the faster the feedback loop and the shorter the iterations.

GitLab counts only “finished” deployments, meaning those completed successfully (without errors). It calculates the average number of deployments made to production environments, identified by a designated “production” deployment tier or environment names like “production” or “prod.” Only deployments reaching these environments contribute to the deployment frequency metric.

An interesting question here is how you define ‘success’ internally. Does it apply to deployments that cover just 10% of your traffic or those that target it all?
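Whatever your internal definition of success, the arithmetic behind the metric is simple: successful production deployments divided by the length of the observation window. A minimal sketch, with made-up deployment dates:

```python
from datetime import date

# Hypothetical dates of successful production deployments.
deployments = [
    date(2024, 3, 1), date(2024, 3, 1), date(2024, 3, 4),
    date(2024, 3, 8), date(2024, 3, 12),
]

def deployment_frequency(deploy_dates, period_start, period_end):
    """Average successful production deployments per day over a period."""
    days = (period_end - period_start).days + 1
    in_period = [d for d in deploy_dates if period_start <= d <= period_end]
    return len(in_period) / days

# 4 deployments over a 10-day window -> 0.4 deployments/day
freq = deployment_frequency(deployments, date(2024, 3, 1), date(2024, 3, 10))
```

Note that two deployments on the same day both count; the metric measures throughput, not distinct release days.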

Lead Time for Changes

This indicator shows how long it takes for a code change (commit) to reach production. It also serves as a continuous health check for your CI/CD pipelines: if it decreases over time, your team has become more productive and delivers necessary modifications more quickly.

GitLab tracks the seconds between two key moments: when a merge request is merged into the main branch and when the code runs flawlessly in production, signifying a successful deployment. One key caveat: lead time for changes does not account for the time it takes to write the code itself.

By default, GitLab measures lead time for changes for deployments that flow through a single branch across multiple stages (for example, development to staging to production). If you have a more complex workflow with separate merge requests for each stage, GitLab treats them as individual deployments, which can inflate the metric.
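The calculation itself boils down to the median gap between merge and deployment. A small sketch with invented merge/deploy timestamps (GitLab reports the value in seconds):

```python
from datetime import datetime
from statistics import median

# Hypothetical (merged_at, deployed_at) timestamp pairs.
changes = [
    (datetime(2024, 3, 1, 9, 0),  datetime(2024, 3, 1, 10, 30)),  # 5400 s
    (datetime(2024, 3, 2, 14, 0), datetime(2024, 3, 2, 14, 45)),  # 2700 s
    (datetime(2024, 3, 3, 8, 0),  datetime(2024, 3, 3, 9, 0)),    # 3600 s
]

def lead_time_seconds(pairs):
    """Median seconds from merge to successful production deployment."""
    return median((deployed - merged).total_seconds()
                  for merged, deployed in pairs)

lt = lead_time_seconds(changes)  # median of 5400, 2700, 3600 -> 3600.0
```

Using the median rather than the mean keeps one unusually slow release from distorting the trend.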

Change Failure Rate

This metric shows how often the changes you release cause failures in production, letting you assess the quality of your code. The higher the rate, the more likely it is that you need to revise your deployment processes or extend your automated test coverage.

To obtain the change failure rate, GitLab divides the number of incidents that occur after deployments by the total number of deployments made to production. Say you deployed new code 20 times in a month and experienced one incident per week, four in total. In this scenario, your change failure rate would stand at 20% (4 incidents / 20 deployments).
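The example above maps directly to a one-line ratio; the only subtlety worth encoding is guarding against a period with no deployments:

```python
def change_failure_rate(incidents: int, deployments: int) -> float:
    """Fraction of production deployments that resulted in an incident."""
    if deployments == 0:
        raise ValueError("no deployments in the observation period")
    return incidents / deployments

# The worked example from the text: 4 incidents over 20 deployments.
rate = change_failure_rate(incidents=4, deployments=20)  # 0.2, i.e. 20%
```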

Time to Restore Service

Quite self-explanatory, this indicator shows how quickly you can remedy failures in production. If recovery consistently takes too long, it is a good idea to review your business continuity and disaster recovery plans.

GitLab measures how long incidents affecting production environments last (in seconds). It assumes a one-to-one mapping: each incident is attributed to a single deployment to production, and no production deployment triggers more than one incident.
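Under that one-incident-per-deployment assumption, time to restore service reduces to the typical duration an incident stays open. A sketch with hypothetical incident windows:

```python
from datetime import datetime
from statistics import median

# Hypothetical production incidents as (opened_at, closed_at) pairs.
incidents = [
    (datetime(2024, 3, 5, 10, 0),  datetime(2024, 3, 5, 11, 0)),   # 3600 s
    (datetime(2024, 3, 9, 22, 0),  datetime(2024, 3, 9, 22, 30)),  # 1800 s
]

def time_to_restore_seconds(incident_windows):
    """Median time (in seconds) that production incidents stayed open."""
    return median((closed - opened).total_seconds()
                  for opened, closed in incident_windows)

ttr = time_to_restore_seconds(incidents)  # median of 3600, 1800 -> 2700.0
```

In practice the accuracy of this metric depends entirely on how promptly incidents are opened and closed in your tracker.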

Not using GitLab yet? Give a self-managed license a test run for 30 days.
No payment required.
Claim offer →

How to Improve Your Performance on DORA Metrics: A Complete 2024 Checklist

At this point, everything is about optimization. Following GitLab’s recommendations, this section lists actions you can take to get the most out of your delivery pipeline using DORA metrics.

Accelerate Delivery Flow

  • Compare deployment frequency across teams and projects and look out for opportunities to increase reliability.
  • Break down large features into smaller, more manageable units to reduce lead time for changes.
  • Implement automated testing and code validation to enhance deployment quality and identify issues before they escalate into financial losses.
  • Analyze the efficiency of CI/CD pipelines across teams and projects (find out more about why CI/CD is important). Turn the spotlight on bottlenecks and optimize for faster deployments.
  • Leverage GitLab Value Stream Analytics to eliminate inefficiencies and improve overall workflow smoothness.

Enhance Stability and Recovery

  • Compare team response times to service disruptions and locate areas for improved communication and collaboration.
  • Increase visibility into production environments to expedite problem detection and troubleshooting (reducing MTTR).
  • Establish well-defined workflows for responding to outages and interruptions to speed up resolution times.

Balance Speed and Quality

  • Compare code quality and stability across teams and projects. Aim for a healthy balance between deployment frequency and lead time for changes without compromising quality.
  • Evaluate and improve the code review process for effectiveness without hindering deployment speed.

About Us

Cloudfresh is a GitLab Partner (Select and Professional Services). We can assist you with implementation, migration, education, integration, advisory, and support.

If you’d like to get a value stream analysis free of charge, we invite you to fill out a short form below. With the Cloudfresh-conducted VSA, you gain:

  • A 360-degree view of the current state of things.
  • A complete understanding of business goals and how to achieve them.
  • An identification of every constraint holding you back.
  • A full-fledged Value Stream map.
  • A cost-benefit analysis ready to act upon (if applicable).

Besides, we can help you set up necessary benchmarks to properly track and derive meaningful insight from DORA metrics and broader GitLab analytics.

Get in touch with Cloudfresh