Code Review Metrics – The Miro Way

We are excited to announce our upcoming webinar, Code Review Metrics – The Miro Way! Join us to discover how to spot the good and avoid the bad in code review signals.

💡 Hosted by Anita Zbieg, PhD, this session features industry experts Egor Siniaev (Head of Engineering at Miro, Core Experience) and Jon Kern (Agile Manifesto Co-author, Software Engineer at Adaptavist). They’ll dive into effective strategies for measuring and optimizing your code review processes.

👉 Key Topics:
◾ Measuring the Code Review Process – The Good, the Bad, and the Ugly
◾ How to Build the Right Code Review Metrics
◾ How to Align Teams around Code Review Metrics

📅 Date: 27 March 2025, 10:00 EDT | 16:00 CET

📍 Where:
#1 Streaming live on LinkedIn
#2 Zoom: https://lnkd.in/dQHfetki

Code review is key to writing better code, catching mistakes early, and fostering team collaboration—but how do you know if your review process is truly working?

We’ll explore how to measure code reviews the right way—identifying what drives code quality and workflow speed while avoiding the pitfalls of misleading or misused metrics.

We’ll dive into what makes a strong code review process, what to avoid, and how to ensure teams use metrics as valuable insights—rather than “zombie dashboards” no one trusts or numbers that get gamed.

Measuring the Code Review Process – The Good, the Bad, and the Ugly

Code reviews help teams write better code, catch mistakes early, and share knowledge. But how do you know if your review process is working well? Measuring it can reveal delays, improve teamwork, and speed up delivery. When done right, tracking reviews leads to faster feedback and better quality.

But not all measurements help. Some teams rush reviews to hit targets, missing critical issues. Others get stuck in slow approvals, frustrating developers. And sometimes, numbers get gamed—quick approvals with no real checks or artificially small PRs to boost stats.

Google is one example of a company that defines a good review process. They focus on three key areas:
◾ Speed – How long does it take to complete a review?
◾ Ease – How difficult is it for developers to navigate the process?
◾ Quality – Is the feedback useful and actionable?

The bad may happen when teams optimize for the wrong metrics—for example, focusing only on speed while ignoring defects. And the ugly? That’s when developers manipulate metrics—rushing reviews, cherry-picking easy PRs, or leaving meaningless comments just to inflate numbers.

How to Build the Right Code Review Metrics? 

What you measure shapes what you optimize. When designing code review metrics, every decision influences how teams work.

At the first level, start with basic but key questions: What does this metric actually tell us? For example, Code Review Turnaround Time measures how long it takes to get feedback on a PR. But is the data clean and reliable—can we trust it? And most importantly, is the insight actionable? If a faster turnaround means a more responsive review system, what concrete steps can we take to improve it?
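
To make this concrete, here is a minimal sketch of how Code Review Turnaround Time could be computed. It uses hypothetical PR timestamps rather than any particular platform's API; the field names and numbers are assumptions for illustration.

```python
from datetime import datetime
from statistics import median

# Hypothetical PR records: in practice these would come from your VCS or
# review tool (opened time and first-review time per pull request).
prs = [
    {"id": 101, "opened": datetime(2025, 3, 3, 9, 0),  "first_review": datetime(2025, 3, 3, 13, 30)},
    {"id": 102, "opened": datetime(2025, 3, 4, 15, 0), "first_review": datetime(2025, 3, 5, 10, 0)},
    {"id": 103, "opened": datetime(2025, 3, 5, 8, 0),  "first_review": None},  # still waiting
]

def turnaround_hours(pr):
    """Hours from PR opened to first reviewer feedback; None if no review yet."""
    if pr["first_review"] is None:
        return None
    return (pr["first_review"] - pr["opened"]).total_seconds() / 3600

reviewed = [turnaround_hours(pr) for pr in prs if pr["first_review"] is not None]
print(f"Median turnaround: {median(reviewed):.1f} h")
print(f"PRs still awaiting a first review: {sum(pr['first_review'] is None for pr in prs)}")
```

Even a small sketch like this forces the actionability question: if the median creeps up, what concretely changes, reviewer assignment, PR size, or team norms around response time?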

The next layer is about defining good vs. bad and finding a good enough balance without overcommitting to a single metric at the cost of others. Speed, ease, and quality in code reviews often pull in different directions—focusing too much on speed might reduce review depth, while prioritizing quality might slow things down. Should we balance these trade-offs, or is it better to optimize one over the others? Similarly, when looking at leading and lagging indicators—like measuring speed alongside defect rates—should we prioritize one, or aim for a more complete picture?
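
One way to read leading and lagging indicators together, rather than optimizing either alone, is sketched below: median review turnaround (leading) is shown next to a post-merge defect rate (lagging) per team. Team names, numbers, and thresholds are invented for illustration, not recommended targets.

```python
# Pair a leading indicator (review turnaround) with a lagging one
# (post-merge defect rate) so neither is optimized in isolation.
teams = {
    "checkout": {"median_turnaround_h": 3.2,  "defects_per_100_prs": 9.0},
    "payments": {"median_turnaround_h": 26.0, "defects_per_100_prs": 2.5},
    "platform": {"median_turnaround_h": 6.5,  "defects_per_100_prs": 3.1},
}

for name, m in teams.items():
    fast = m["median_turnaround_h"] < 8       # assumed threshold, tune per org
    clean = m["defects_per_100_prs"] < 5      # assumed threshold, tune per org
    if fast and clean:
        note = "balanced: fast reviews without a defect spike"
    elif fast:
        note = "fast but leaky: speed may be crowding out review depth"
    elif clean:
        note = "thorough but slow: quality holds, feedback arrives late"
    else:
        note = "both indicators lag: worth a closer look"
    print(f"{name:9s} turnaround={m['median_turnaround_h']:5.1f}h "
          f"defects/100PRs={m['defects_per_100_prs']:4.1f} -> {note}")
```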

With both passive data (engineering logs) and active data (engineering surveys) available, how do you best combine them to answer these questions?
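
As a rough sketch of combining the two sources: join log-derived metrics with survey scores per team and flag where they disagree. The team names, scores, survey question, and join key below are all assumptions for illustration.

```python
# Combine passive signals (from review-tool logs) with active ones (from a
# developer survey), keyed by team. All values here are illustrative.
log_metrics = {          # derived from repository / review-tool logs
    "checkout": {"median_turnaround_h": 3.2},
    "payments": {"median_turnaround_h": 26.0},
}
survey_scores = {        # e.g. "code review works well for my team", 1-5 scale
    "checkout": {"review_ease": 2.4},
    "payments": {"review_ease": 4.1},
}

for team in sorted(log_metrics.keys() & survey_scores.keys()):
    hours = log_metrics[team]["median_turnaround_h"]
    ease = survey_scores[team]["review_ease"]
    # Where logs look fast but the team reports friction (or the reverse),
    # the two sources disagree, and that is usually where to dig deeper.
    mismatch = (hours < 8 and ease < 3) or (hours >= 8 and ease >= 4)
    flag = "investigate: logs and survey disagree" if mismatch else "signals agree"
    print(f"{team}: turnaround={hours}h, survey ease={ease}/5 -> {flag}")
```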

How to Align Teams around Code Review Metrics?

Metrics alone don’t drive change—teams do. A dashboard full of numbers that no one trusts or acts on is just noise. The real challenge isn’t just measuring code reviews but making the data meaningful and actionable. 

We measure as part of the measure-act-learn cycle—but measurement alone isn’t enough. The real impact comes when teams use insights to take action, learn from the results, and continuously improve.

To do that, teams need to trust the data, align around shared goals, and see value in the insights. If engineers don’t believe the numbers reflect reality, they won’t use them. But when metrics help teams compare trends, identify best practices, and learn from each other, they become more than just reports—they become tools for continuous improvement. 

So how do we make that happen? What needs to be simplified? What should be amplified?

March 13, 2025

Want to explore more?

See our tools in action

Developer Experience Surveys

Simple, holistic, active observability of software delivery to detect and drive more strengths & fewer frictions

WorkSmart AI

Advanced, real-time, passive observability of work dynamics to detect and drive better time spent & collaboration