Judging CI Success
In a recent white paper entitled Creating Reliable Software, we made the case for a new class of solutions to help keep CI pipelines clear and software engineering teams successful. We called this new class Software Flight Recording Technology.
But what does success look like?
Every organization judges success differently. To some, finding a single, hard-to-reproduce bug per month is enough to deem changes to their CI pipeline effective. Others treat the quarterly reduction in aggregate developer hours spent finding and fixing software defects as their key performance indicator. Speed of delivery, backlog reduction, and product reliability are also commonly tracked.
Whatever the success criteria, they should reflect the overarching goals of the larger software engineering team, or even corporate objectives. To ensure that teams measure and monitor the success criteria that matter most to them, software engineering managers and team leads should establish their own KPIs.
Some questions to consider when developing CI success metrics:
- Is code shipped sooner than in previous deployments?
- How many defects are currently in the backlog compared to last week/month?
- Are developers spending less time debugging?
- Are other teams waiting for updates?
- How many developer hours does it take to find and fix a single bug?
- How long does it take to reproduce a failure?
- How long does it take to fix a failure once found?
- What is the average cost to the organization of each failure?
These questions are a starting point. As mentioned earlier, each organization is different and values certain aspects of CI depending on team dynamics and needs. What's important is to establish a baseline, to ensure agreement and commitment across teams, and to benchmark progress.
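As an illustration, here is a minimal sketch (in Python, using hypothetical field names and made-up numbers) of how a team might turn a few of these questions into a baseline, computed from an export of resolved CI failures:

```python
import statistics

# Hypothetical export of resolved CI failures; the field names and
# values below are illustrative only, not from a real tracker.
failures = [
    {"hours_to_reproduce": 6.0, "hours_to_fix": 3.5, "developer_rate_usd": 95},
    {"hours_to_reproduce": 1.5, "hours_to_fix": 2.0, "developer_rate_usd": 95},
    {"hours_to_reproduce": 12.0, "hours_to_fix": 5.0, "developer_rate_usd": 110},
]

# Baseline KPIs: average time to reproduce, average time to fix,
# and average cost per failure (total hours spent * hourly rate).
avg_reproduce = statistics.mean(f["hours_to_reproduce"] for f in failures)
avg_fix = statistics.mean(f["hours_to_fix"] for f in failures)
avg_cost = statistics.mean(
    (f["hours_to_reproduce"] + f["hours_to_fix"]) * f["developer_rate_usd"]
    for f in failures
)

print(f"Average hours to reproduce a failure: {avg_reproduce:.1f}")
print(f"Average hours to fix a failure:       {avg_fix:.1f}")
print(f"Average cost per failure:             ${avg_cost:,.0f}")
```

Run against real data from an issue tracker or CI system, the same calculation gives a number to revisit each quarter and compare against the agreed baseline.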
To learn more, please download Creating Reliable Software!