DORA and Agile – a bake-off of delivery pipeline measurement techniques

Today’s post is inspired by Matt Shuster, who asked for my opinion on DORA vs Agile pipelines. Let’s start with the basics: measuring the performance of an agile delivery pipeline requires a combination of metrics that cover both efficiency and effectiveness. Here are a few metrics commonly used for this purpose:

  • Lead time: The time from when a feature is requested to when it is delivered to the customer.
  • Cycle time: The time it takes to complete a specific task or feature from start to finish.
  • Throughput: The number of features delivered per unit of time.
  • Defect density: The number of defects per unit of delivered code.
  • Deployment frequency: The frequency of code releases to production.
  • Time to restore service: The time it takes to restore service after a production failure.
  • User satisfaction: Feedback from users on the quality and functionality of the delivered features.
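To make the list above concrete, here is a minimal sketch of how the quantitative metrics could be computed from work-item records. The data, field names, and the weekly throughput window are all invented for illustration:

```python
from datetime import datetime

# Hypothetical work items: when requested, when work started, when delivered,
# plus defect count and delivered lines of code per item.
items = [
    {"requested": datetime(2023, 1, 2), "started": datetime(2023, 1, 4),
     "delivered": datetime(2023, 1, 9), "defects": 1, "loc": 400},
    {"requested": datetime(2023, 1, 3), "started": datetime(2023, 1, 5),
     "delivered": datetime(2023, 1, 8), "defects": 0, "loc": 250},
]

# Lead time: request -> delivery; cycle time: start -> delivery.
lead_times = [(i["delivered"] - i["requested"]).days for i in items]
cycle_times = [(i["delivered"] - i["started"]).days for i in items]
avg_lead = sum(lead_times) / len(lead_times)
avg_cycle = sum(cycle_times) / len(cycle_times)

# Throughput: items delivered per week over the observed window.
window_days = (max(i["delivered"] for i in items)
               - min(i["requested"] for i in items)).days or 1
throughput = len(items) / (window_days / 7)

# Defect density: defects per 1000 lines of delivered code.
defect_density = 1000 * sum(i["defects"] for i in items) / sum(i["loc"] for i in items)
```

User satisfaction is deliberately missing here: it comes from surveys and feedback, not from the pipeline’s own timestamps.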

As with any metrics programme, these should be monitored regularly and used to continuously improve the delivery pipeline by identifying bottlenecks, optimizing workflows, and reducing waste. Additionally, as Agile is not one-size-fits-all, it’s important to reassess and adjust the chosen metrics periodically to ensure they still reflect the goals and priorities of the organization.

On the other hand, let’s take a quick look at DORA. The DORA (DevOps Research and Assessment, popularized by the book Accelerate) framework is a set of four metrics that together give a comprehensive view of an organization’s software delivery performance. The four metrics are:

  • Lead time for changes: The time it takes to go from code committed to code successfully running in production.
  • Deployment frequency: How often code is successfully deployed to production.
  • Mean time to recovery: The average time it takes to restore service after an incident.
  • Change failure rate: The percentage of changes that result in a production failure.

These metrics align well with the metrics commonly used to measure the performance of an agile delivery pipeline and can be used in a complementary manner to validate the software architecture. For example, a short lead time and a high deployment frequency indicate that the delivery pipeline is efficient and streamlined, while a low change failure rate and a low mean time to recovery indicate that the architecture is robust and reliable.

I promised a bake-off, so here we are 🙂 The short version of the comparison: the two approaches provide different but complementary perspectives on the performance of an organization’s software delivery process.

On one hand, metrics such as lead time, cycle time, throughput, and defect density focus on efficiency and effectiveness of the delivery pipeline. They help to measure the time taken to complete a task, the speed at which features are delivered, and the quality of the delivered code. These metrics provide insight into the processes and workflows used in the delivery pipeline and help identify areas for improvement.

On the other hand, the DORA framework provides a comprehensive view of the performance of an organization’s software delivery process by focusing on four key metrics: lead time, deployment frequency, mean time to recovery, and change failure rate. These metrics help to measure the speed and reliability of the delivery pipeline and provide insight into the resilience and stability of the software architecture.

So, which of them to use? By using both sets of metrics together, organizations can get a complete picture of their delivery pipeline performance and identify areas for improvement in both architecture and processes. This can help ensure that the architecture supports the needs of the organization and the goals of the delivery pipeline, while also providing a way to continually assess and optimize performance over time. For example, metrics such as lead time and cycle time can highlight bottlenecks and inefficiencies in the delivery pipeline, while metrics such as change failure rate and mean time to recovery can highlight weaknesses in the architecture that may be contributing to production failures.
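The "use both" idea can be sketched as a hypothetical health check that mixes speed signals (shared with the agile metrics) with stability signals (the DORA side). The threshold values below are invented for the example, not official DORA cut-offs:

```python
def pipeline_health(lead_time_hours, deploys_per_day, cfr, mttr_hours):
    """Flag which side of the pipeline needs attention. Thresholds are illustrative."""
    findings = []
    # Speed signals: process-oriented, point at workflow bottlenecks.
    if lead_time_hours > 24:
        findings.append("slow lead time: look for hand-off bottlenecks")
    if deploys_per_day < 1:
        findings.append("low deploy frequency: consider smaller batches")
    # Stability signals: architecture-oriented, point at robustness gaps.
    if cfr > 0.15:
        findings.append("high change failure rate: review testing and rollout strategy")
    if mttr_hours > 1:
        findings.append("slow recovery: improve observability and rollback paths")
    return findings or ["healthy on all four signals"]

# A pipeline that ships slowly but recovers well gets process-side findings only.
print(pipeline_health(lead_time_hours=36, deploys_per_day=0.5, cfr=0.05, mttr_hours=0.5))
```

The point of the sketch is the split in the comments: speed findings send you to the process, stability findings send you to the architecture.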

In summary, using metrics to validate a software architecture and using the DORA framework together gives a comprehensive view of an organization’s software delivery performance and helps identify areas for improvement in both architecture and processes. As you probably figured out, I like case studies and tools, so… here we are 🙂

  • Netflix: Tracks metrics including lead time and cycle time and feeds the data back into process and architecture improvements, resulting in a highly efficient delivery pipeline.
  • Amazon: Monitors metrics including deployment frequency and mean time to recovery, which has helped it keep its architecture reliable and respond to incidents and restore service quickly.
  • Spotify: Uses metrics including lead time and throughput to continuously optimize its processes, increasing pipeline speed and delivering high-quality features to users faster.
  • Google: Combines lead time, deployment frequency, and mean time to recovery to keep its pipeline both fast and reliable, shipping features and updates quickly and efficiently.
  • Microsoft: Relies on metrics including lead time and cycle time to optimize its processes and shorten the path from idea to delivered feature.
  • Shopify: Watches deployment frequency, mean time to recovery, and change failure rate to keep its releases reliable and its architecture stable.
  • Airbnb: Tracks lead time, deployment frequency, and mean time to recovery to deliver features and updates to users quickly and reliably.

These case studies demonstrate the importance of regularly measuring and analyzing performance metrics to validate a software architecture and improve the delivery pipeline. By combining metrics and regularly reassessing their approach, organizations can continuously improve the pipeline and ensure the architecture supports both the organization’s needs and the pipeline’s goals. And speaking of tools: there are various tools that can be used to measure the DORA metrics. Some popular options include:

  • Datadog: Real-time monitoring and analytics for cloud-scale infrastructure, applications, and logs; it can track all four DORA indicators and generate reports and alerts from that data.
  • New Relic: A performance management platform with real-time visibility into application performance, suited to tracking lead time, deployment frequency, and mean time to recovery.
  • Splunk: A platform for searching, analyzing, and visualizing machine-generated data; deployment and incident events indexed in Splunk can feed the same indicators.
  • AppDynamics: An application performance management solution with real-time visibility into applications and infrastructure, usable for the same set of indicators.
  • Prometheus: An open-source systems monitoring and alerting toolkit; counters for deployments and incidents can be scraped, graphed, and alerted on.
  • InfluxDB: An open-source time series database, a natural store for deployment and recovery timestamps.
  • Grafana: An open-source data visualization and analysis platform, typically layered on top of Prometheus or InfluxDB to build DORA dashboards.
  • Nagios: An open-source IT infrastructure monitoring solution that can detect and alert on outages, feeding mean-time-to-recovery measurements.
  • JIRA: Project and issue tracking software; by monitoring how long work items take to move through the development stages, it yields lead time and cycle time.

These are just a few examples of tools that can be used to measure and track the DORA metrics; the specific tool or combination will depend on the needs of the organization and the size and complexity of the delivery pipeline. For agile, we have our own set of tools too; I picked only some of them (and yes, JIRA is on both lists):

  • Trello: A visual project management tool built around boards and cards, good for visualizing how work items flow through the stages of development.
  • Asana: A team collaboration tool that can track the same flow of work items across stages.
  • JIRA: Project and issue tracking software, again useful for tracking and visualizing work-item progress.
  • Clubhouse: A project management tool designed specifically for agile teams; it tracks work items through the development stages and visualizes the flow of work through the delivery pipeline.
  • Pivotal Tracker: An agile project management tool for tracking and visualizing work-item progress across stages.

I hope this helped answer the DORA vs Agile metrics question, with the answer being: use both. The two sets of metrics complement each other, so there is no single winner in this bake-off.
