I had a fantastic time hosting our session on spotting the good and avoiding the bad in code review signals with two incredible guests: Egor Siniaev, Head of Engineering at Miro, responsible for part of the core experience, and Jon Kern of Adaptavist, co-author of the Agile Manifesto. Both are passionate about building engineering cultures with exceptionally high openness to feedback and improvement.
You can find the full conversation in the webinar recording—but if you’re short on time, here are the highlights I found most valuable.
Check out the 7-minute recap video!
Code reviews help teams write better code, catch issues early, and work better together. Measuring them can reveal delays, improve feedback, and speed up delivery — but only if you track the right things.
Bad metrics lead to rushed reviews, gamed numbers, or slow approvals that frustrate developers. For example, focusing only on speed can sacrifice quality. Google’s approach balances speed, ease, and quality to ensure useful feedback, not just fast approvals.
Metrics alone don’t drive change. Teams do. Dashboards full of untrusted numbers are just noise. Impact comes when teams measure, act, and learn — using data they believe in to improve how they work.
So how do we get there? What should we simplify? What’s worth amplifying?
I found a lot of people around me who were complaining about Miro’s code review processes and practices—and honestly, I was doing the same. But then one really smart Miro guy—now a good friend—asked me, or actually challenged me: “Why are you also complaining?” That question really pushed me. So together, we organized a working group with some of the folks who were also voicing similar frustrations. I just reached out to them on Slack and said, “Hey, do you want to actually do something about this?” That’s how it all started. – Egor
We have a company-level goal of becoming an industry-leading technology platform, enabling faster value delivery. We also have a no-bullshit culture, and it's very important to prove it. – Egor
If you build an environment with a no-bullshit culture, then people should be able to speak openly about things. – Jon
Any problem, small or big, needs to be decomposed into small chunks in order to be solved—and then you go step by step. Notice the problem and formulate it. Collect everything you observe. Describe what you want to achieve. Execute. Reflect and iterate. – Egor
Notice the problem. This is maybe a more product-oriented approach we’re taking here at Miro. If you understand the problem really well and can actually formulate it—clearly define that it’s present—then you can start collecting everything around you. Like, gather observations, find proof that the problem exists for you and the people around you. – Egor
Describe what you want to achieve. Because just knowing the problem isn’t enough. You need to understand the goal—you need to see the future, the vision. – Egor
Execute—try something, maybe it works, maybe it fails. But the important thing is to reflect and iterate. You do something, observe the change—is it working or not? If not, repeat the process and try again. – Egor
Bad and good code review metrics—this is very specific to the org. In general, most teams want to have an efficient process that speeds up development while keeping the necessary level of quality and knowledge sharing. The bad things—people will tell you, or you’ll uncover them through surveys. They’ll say things like, “I don’t like this because it’s slowing me down,” or “Why am I doing this?” They’ll point out all the negative effects the team wants to eliminate. You’ll also see those bad things reflected as negative effects in some areas of your org. – Egor
You don’t design the metrics—you design the effects you want to see in the future state of your “system,” in how your teams work, or in how your company operates. You need to visualize that future. – Egor
Name the problem by asking: Do you see any negative effects around you? What are they? Can you influence them today? This is important because sometimes you see the problem, and you really can't do anything about it. – Egor
Because we're talking about new ways of working, we set a clear goal for ourselves: to improve the internal code review process for all our engineers by sharing best practices, defining rules, and optimizing the tools. When we did the research, we found a lot of different problems in these areas. And we started collecting the context—why we think this goal makes sense. – Egor
We found that different engineers were doing reviews very differently, had different expectations, and gave unstable feedback. That led to unstable quality—code reviews took longer or had some negative effects. There was also a group of engineers who weren’t using the tools and automations 100%. So, someone would create a PR and immediately ping people on Slack—even though we already had automated tools for notifications. All these things just weren’t working. – Egor
We went together with the working group through many different contexts—Slack discussions, meetings, and a lot of interviews on team-specific processes and hacky patches. We provided all this context to explain why we set our goal. – Egor
I’m always trying to migrate toward using the delivery of value in the client’s hands—whatever that means—as the primary metric. It’s a very powerful way to look at things. When it comes to code review, I ask: Why are we doing this? It’s about quality. Why are we doing this? So we can deliver faster. Why are we doing that? So we can provide better value to the user. I’m always trying to bring it back to that. – Jon
What will give you insights about what to measure and what to do? Talk with people, run surveys, do the interviews—prove that the problem exists. People will give you a clear answer. But you need to be smart enough to understand what the core problem is. – Egor
It’s a challenge to figure out which metrics work and which ones don’t. People can game the system depending on the metrics you’re trying to collect, and that can lead to the wrong behavior. – Jon
Be mindful about the questions that you're asking. Don't just ask to confirm that the problem exists—ask questions that give people food for thought, like, "Okay, do we really have this problem? Why?" The questions you ask will unblock the next improvements. – Egor
The size of the audience you are listening to is key. Don't rely on a small chunk of people. If you run a survey—or do other testing—without a wide spread across different functions, streams, or departments, you'll end up with a skewed representation of the problem. – Egor
Collect the data. But the data is very dirty, so you need to be very mindful about it. What you see is still questionable. You need to clean it up—remove all the noise and garbage. What you collect will give you good signals for how it is now, and what kind of next metrics you could measure. – Egor
Be careful what you ask for—you might get it. Be careful what you measure—you might get that too.
It really comes down to having the smart people in the room understand the value we’re trying to deliver, and share that knowledge throughout the organization. Everything should have a purpose. People should be able to use their brains to make judgments—to do the smallest thing that has the greatest impact—rather than just being order takers, automatons, or AI bots. – Jon
We aggregated the problems known from survey results, like: slow review times, big PRs, pinging reviewers, bad/no PR descriptions, too many teams per PR, and many others. Our working group went through all the challenges and marked them by area: time, communication, automation, standards, ownership, size. – Egor
When the group goes through the challenges, your brains immediately start generating solutions. We came up with some rough ideas of what we could do and started prioritizing them—what’s easy to do, what’s difficult to do, and what could be impactful. That helped us to prioritize our efforts. – Egor
Try some things, and talk to the folks who really know: What isn’t working? What is working? Elevate the discussion and bring in a bit of holistic systems thinking: Why are we doing code review in the first place? Why are we frustrated with it now? You need to better understand the context—what problem you’re trying to solve, whether it’s a big idea or a small one. – Jon
I usually prefer the word signals, not metrics—because when you say, “Yeah, we have this metric,” people start thinking, “Okay, if we have this metric, I’ll optimize for it.” You can’t optimize for signals. So we’re using the signals as insights for us. – Egor
Goals are not easy to formulate. Start by describing the end state and writing down how you see your system, your process, your team in the future. This could be six months, one year, or two years ahead. That will help. – Egor
Describe what good means, but also what bad means, as future states. Describe the end state you wish for—but also describe what you would treat as a state of failure: failure for a team, for a project. At Miro, we call it a pre-mortem review. Imagine it. – Egor
Set ambitious, but clear and simple goals. Aim high and use numbers—because numbers are something people can understand. For example: “We want to increase by 10%.” From what exactly? Be clear: right now we have this number, and we want to end with that number. That’s how you’ll measure success. – Egor
Make goals public and get people’s review and feedback. When a goal is publicly reviewed, you’ll collect interesting and critical input. And again—no bullshit culture. We’re still following this. It’s very important, because if you aim high and people say, “Oh, this is bullshit, it won’t work,” you need to know that from the beginning. – Egor
Publicly reviewed goals are similar to what I often talk about—aligning the thought bubbles above our heads. We might be saying the same things about code review or anything else, but I’m not convinced everyone is actually seeing the same thing. – Jon
Public goals and reviews will help you get commitment from the people, because they see that the problem is valid and the context is clear. If they understand the goal, you can say, “Let’s do it together.” – Egor
We created a Confluence page where people could see all the info—what’s happening, what’s in progress, and all the artifacts we collected. If we noticed someone complaining about something or making suggestions, we’d say, “Okay, come to our working group. Let’s work together. Stop complaining—we’ll use your energy.” – Egor
Increase the awareness. Spread this information in many different directions—through engineering and leadership meetings, Slack messages, and so on. That will put a lot of pressure on you, but you'll get valuable feedback and insights, and it will push you to move forward. – Egor
I’m always trying to build a kind of shared pool of knowledge with teams. Why? Because first, it’s what we pay attention to—only then can we intentionally and collectively try to do something about it. It’s hard to change things that are implicit, but when we make them explicit and understood by everyone in a similar way, we can align. It takes effort to build a shared mindset and alignment. – Jon
Here is an example of a signals dashboard we created. Unfortunately, I can't share exact numbers because of our NDA, but I can show the general structure. You can also see the live dynamics—the data is fresh, so you can see what's happening. – Egor
As a data source, choose your code management system. In our case, we had two data sources at the same time—Bitbucket and GitHub—which created additional pain points for us. We also used Jira to pull in related signals and data points. We did some "magic" to map the data between the sources, which really helped—thanks to our data analyst who supported us. And all of this was brought together visually in Looker. – Egor
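For anyone wiring up something similar, here is a minimal sketch of that mapping step: normalizing pull request records from two hosts into one schema before loading them into a BI tool. The field paths follow the public GitHub and Bitbucket Cloud APIs, but the target schema, and the use of Bitbucket's `updated_on` as a merge-time proxy, are assumptions for illustration, not Miro's actual pipeline.

```python
# Sketch: map PRs from two code hosts into one common record shape.
from datetime import datetime

def from_github(pr: dict) -> dict:
    """Normalize a GitHub REST API pull request payload."""
    return {
        "source": "github",
        "repo": pr["base"]["repo"]["full_name"],
        "author": pr["user"]["login"],
        "created_at": _ts(pr["created_at"]),
        "merged_at": _ts(pr["merged_at"]) if pr.get("merged_at") else None,
    }

def from_bitbucket(pr: dict) -> dict:
    """Normalize a Bitbucket Cloud 2.0 pull request payload."""
    return {
        "source": "bitbucket",
        "repo": pr["destination"]["repository"]["full_name"],
        "author": pr["author"]["display_name"],
        "created_at": _ts(pr["created_on"]),
        # Bitbucket exposes no merged_at; updated_on on a MERGED PR is a
        # rough proxy (an assumption made for this sketch).
        "merged_at": _ts(pr["updated_on"]) if pr["state"] == "MERGED" else None,
    }

def _ts(value: str) -> datetime:
    # GitHub uses a trailing "Z"; fromisoformat needs "+00:00" before 3.11.
    return datetime.fromisoformat(value.replace("Z", "+00:00"))
```

Once both hosts land in the same shape, everything downstream, like per-repo or per-team filters, stops caring where a PR came from.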
Choose the top signals you want to show for your current state. For us, that was four different numbers—the 95th percentile, 80th percentile, median, and average. – Egor
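Reproducing those four headline numbers takes only the standard library. A minimal sketch, assuming `hours` holds review turnaround times (in hours) for merged PRs; note that BI tools may interpolate percentiles slightly differently:

```python
import statistics

def headline_signals(hours: list[float]) -> dict:
    # quantiles(n=100) returns 99 cut points: index 94 ~ p95, index 79 ~ p80.
    cuts = statistics.quantiles(hours, n=100)
    return {
        "p95": cuts[94],
        "p80": cuts[79],
        "median": statistics.median(hours),
        "average": statistics.fmean(hours),
    }

print(headline_signals([0.5, 1.0, 2.0, 3.0, 8.0, 26.0, 70.0]))
```

Showing p95 next to the median is what surfaces the long tail; an average alone hides the PRs that sit for days.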
Clean the data. For example, weekends were influencing our data a lot—so we removed them. That was one of the data-cleaning steps we took, but only to a certain extent, since we couldn’t fully remove non-working hours due to our hybrid work setup. – Egor
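Here is one way that weekend-removal step might look. The day-by-day walk below is an illustration of the idea, not Miro's actual cleaning job:

```python
from datetime import datetime, timedelta

def hours_excluding_weekends(start: datetime, end: datetime) -> float:
    """Hours between start and end, skipping Saturdays and Sundays."""
    total = 0.0
    cursor = start
    while cursor < end:
        # Advance in at most one-day steps, clamped to midnight boundaries.
        next_midnight = (cursor + timedelta(days=1)).replace(
            hour=0, minute=0, second=0, microsecond=0)
        step = min(end, next_midnight)
        if cursor.weekday() < 5:  # Mon=0 .. Fri=4; skip Sat/Sun
            total += (step - cursor).total_seconds() / 3600
        cursor = step
    return total
```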
Use visuals as much as you can. Set the goals directly in the visuals. Show trends and moving averages—when people see the numbers and patterns, they’re more likely to take action. For example, we said, “This is our goal and this is a stretch goal.” Add other relevant information to help people understand the context better. – Egor
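As a rough illustration of that kind of chart, here is a sketch with pandas and matplotlib. The 14-day window and the goal values are made-up numbers; `daily` is assumed to be a Series of daily median review hours indexed by date:

```python
import pandas as pd
import matplotlib.pyplot as plt

GOAL_HOURS, STRETCH_HOURS = 8.0, 4.0  # illustrative targets

def plot_trend(daily: pd.Series) -> None:
    # Raw daily medians, faded, with a smoothed trend on top.
    ax = daily.plot(alpha=0.4, label="daily median")
    rolling = daily.rolling(window=14, min_periods=7).mean()
    rolling.plot(ax=ax, linewidth=2, label="14-day moving average")
    # Put the goals directly in the visual, as Egor suggests.
    ax.axhline(GOAL_HOURS, linestyle="--", label="goal")
    ax.axhline(STRETCH_HOURS, linestyle=":", label="stretch goal")
    ax.set_ylabel("review time (hours)")
    ax.legend()
    plt.show()
```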
I like to think of data as a way to create an accountability dashboard. What does that mean? We ask for something to deliver some value—some expected outcomes. Right from the beginning, we ask: What is it supposed to do? You make a strong statement about what you're expecting, and that clarity allows everyone to act appropriately. That kind of accountability dashboard usually involves little API calls—sometimes pushing data to a BI system in the cloud—as things happen. That helps us prove usage or collect other evidence that the value is actually being delivered. – Jon
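Jon's "little API calls" can be as small as this sketch: emit an event to a collector as things happen. The endpoint URL and payload shape here are hypothetical, not a real service:

```python
import json
import urllib.request

def track(event: str, **props) -> None:
    """POST a small usage event to a (hypothetical) BI collector."""
    payload = json.dumps({"event": event, "props": props}).encode()
    req = urllib.request.Request(
        "https://bi.example.com/events",  # hypothetical collector URL
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=2)

# Example: prove the shipped feature is actually being used.
# track("feature_used", feature="bulk_export", user_id="u123")
```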
Do not trust your data blindly. When you build dashboards and see improvements—challenge them. Say, “I don’t believe it,” and dig deeper. You’ll start noticing a lot of details over time. – Egor
Why you shouldn’t always believe the data—and why you need to challenge it every time—is something we’ve learned at Miro. I remember once I saw a significant improvement in our median, and I thought, “Wow, it’s ten times better!” But I said to myself: I don’t believe it. And later we found out it was just one single repository we had created with small PRs that were reviewed and merged in less than an hour—all pairs. And yeah—that’s what skewed the statistics. – Egor
There was still a signal there: work on smaller things. – Jon
That’s true. – Egor
If something looks awesome, it’s worth digging in to understand why—and whether it truly reflects what you’re measuring. – Jon
Go as deep into the data as you can—look into the details and find different ways to represent them. – Egor
The next level for us was creating buckets for different time slots—how fast our code is reviewed. For example, how many PRs were closed in the first hour, the next three hours, the next four hours, the next day, and so on. – Egor
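A sketch of that bucketing; the cut points below are illustrative, and the right slots will be whatever matches your own review rhythm:

```python
from collections import Counter

# (upper bound in hours, label); checked in order.
BUCKETS = [(1, "<1h"), (4, "1-4h"), (24, "4-24h"), (float("inf"), ">1 day")]

def bucket_counts(hours: list[float]) -> Counter:
    """Count merged PRs per review-turnaround bucket."""
    counts = Counter()
    for h in hours:
        label = next(label for limit, label in BUCKETS if h <= limit)
        counts[label] += 1
    return counts

print(bucket_counts([0.3, 2.5, 6.0, 30.0, 0.9, 50.0]))
# Counter({'<1h': 2, '>1 day': 2, '1-4h': 1, '4-24h': 1})
```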
When we added a filter by repo or team and started looking into it, we noticed problems with our CI in general. If something is really fast, people get to it and review PRs quickly. But if it’s slow, they forget about it, switch to another task, and the PR ends up getting merged only the next day. – Egor
Look at the dynamics—what’s happening over time. For example, we saw a drop that was due to a company-wide event lasting two or three days, when people stopped writing PRs. It’s not that things improved—we just understood the context. The holiday season is also clearly noticeable. But in some cases, you’ll see fluctuations, and yeah—something’s going on that could bring you insights. – Egor
Add filters in dashboards to view the data across different dimensions—like teams, domains, or anything else that’s valuable for you. That really helps. And it’s not just helpful for you, but also for others. Sure, you can observe the general data, but filters let you provide people with more relevant insights—whether that’s by team, by repository, or something else specific to their context. – Egor
We use data and dashboards to build accountability: We asked for this, we expected this—is it happening? – Jon
Focus on outliers—both at the team level and the PR level. This level of depth is key for optimization and identifying issues in the data. You won’t see these kinds of insights if you only look at averages. – Egor
We started by looking at the team level: Which teams do we need to talk with more? Focus on specific teams where PRs have been stuck for more than ten days, for example—that helps a lot. – Egor
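Once the data is normalized, surfacing those stuck PRs is a simple filter. A sketch, assuming each record carries an `opened_at` timestamp (the ten-day threshold comes from Egor's example):

```python
from datetime import datetime, timedelta, timezone

STUCK_AFTER = timedelta(days=10)

def stuck_prs(open_prs: list[dict], now: datetime | None = None) -> list[dict]:
    """Open PRs waiting longer than the threshold, oldest first."""
    now = now or datetime.now(timezone.utc)
    return sorted(
        (pr for pr in open_prs if now - pr["opened_at"] > STUCK_AFTER),
        key=lambda pr: pr["opened_at"],
    )
```

As Egor notes next, some of what this surfaces will turn out to be data problems rather than process problems, and that is useful too.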
By analyzing data at the PR level, we found many issues with processes or even wrong data points. We talked to people, and they told us things like, “Yeah, I just went on a two-week vacation and no one cared about the PR,” or “That was actually a draft PR—I just forgot to mark it as draft.” – Egor
Trust the data by verifying it. Sometimes you just take a look and know—this makes no sense. It’s probably because there’s something you weren’t aware of. Find out what it is. – Jon
Set the guardrails—signals that can help you see whether your work on the main metrics is making other things worse. Think about which guardrails will help you catch issues or side effects you didn't predict. – Egor
In our case, the guardrails were the number of PRs, Jira tickets, and the number of bugs. Maybe these weren’t the best examples—and actually, we didn’t see any influence on them. Or maybe that’s a kind of proof. I don’t know. – Egor
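Even simple guardrails can be automated as a before/after drift check. A sketch; the signal names and the 10% tolerance are assumptions for illustration:

```python
TOLERANCE = 0.10  # flag any guardrail that drifts more than 10%

def guardrail_alerts(before: dict, after: dict) -> list[str]:
    """Compare guardrail signals before and after a change; list big drifts."""
    alerts = []
    for name, old in before.items():
        new = after[name]
        drift = (new - old) / old
        if abs(drift) > TOLERANCE:
            alerts.append(f"{name}: {old} -> {new} ({drift:+.0%})")
    return alerts

print(guardrail_alerts(
    {"prs_per_week": 120, "jira_tickets": 300, "bugs": 14},
    {"prs_per_week": 95, "jira_tickets": 310, "bugs": 15},
))  # ['prs_per_week: 120 -> 95 (-21%)']
```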
Impact is not just about the metrics themselves. I strongly believe that the heart of execution is the people. We started as a small working group—but a very proactive one—because its members were the ones who had been complaining and genuinely wanted to solve the problem. The right people are the key to success in any initiative. – Egor
People will help you choose the right metrics, prioritize, collect information, and motivate each other to take small baby steps every day—while working in parallel. – Egor
I was leading, and every week I thought: okay, this week I have this meeting—I need to prepare something, I need to show progress. And I think the others were thinking the same. That helped keep us working as a team—iterating, bringing something to the table, and discussing it. That's also how we filtered: a lot of people were complaining, and we brought them into the team. Some of them brought great insights and then disappeared—because they didn't have the capacity to influence things. – Egor
It’s very important to show progress—especially to the engineering community. Share what you’re doing, what you’ve achieved, what you’ve learned, and what insights you’ve gained. Be vulnerable and open to any feedback. A no-bullshit culture means we need to build and prove that culture through our actions. – Egor
More teams need to be honest and accountable—including the people asking for the work. Dashboards with data make that visible. – Jon
Don’t forget to look back and reflect on the results you’ve achieved. If you’re not satisfied—iterate. If you’re okay with the current state—put it on hold and let time define the next set of requirements for future improvements. – Egor
Listen to people—analyze what they’re saying. Make changes and show that you’re actually trying to solve the problem. You’ll get a lot of feedback, support, and learnings. Then show what you’ve done, and listen again. – Egor
It’s a challenge—the real world is messy. And I think it’s fantastic that, in this part of your world, you grabbed hold of it—even though it’s flailing and fighting to maintain the status quo. Sometimes you just have to keep pushing, keep working at it—chipping away bit by bit. – Jon
Prove the Culture of Action - No Bullshit
Never stop improving. Listen to your people. And just stop talking and start doing. – Egor
The no-bullshit culture still runs deep in many Mironeers—especially among the old-schoolers, as I call them. It all started because one guy challenged me: Stop complaining and fix this. That’s all. – Egor
People are very important: work directly with teams. You'll find a lot of leaders within the teams who have real influence—and without them, you won't be able to drive change. So you need to work through them, show examples, and bring them along. – Egor
More often than not, the team knows what’s going wrong—and they know what could be better, if you just let them. If we can build an environment where you don’t have to ask for permission—but also don’t blindside me—that’s where trust grows. That kind of culture is so key. – Jon
The right people are often the more collaborative ones. Sometimes you don’t necessarily need the top expert—because they’re not always the most collaborative. What you need are people who are interested and engaged. It’s kind of like fanning the flames—the moths will come. You just keep encouraging it, and it spreads. – Jon
What I noticed with the people we worked with is that they talked less and did more—and that was really nice to see. Sometimes that behavior is missing. Sometimes we spend too much time talking about alignment, the problem, what we might do… Just do it. If it fails, you’ll learn by tomorrow. And that’s all. – Egor
What I see—in a very beautiful nutshell—is, first off, the guts to call out the Head of Engineering. “Why are you complaining? You can do something about this.” And he did. A lot of people, at many different levels, can appreciate that moment of realization. And not only did he jump in, but he also got others involved. He delegated, pulled in converts, and rallied people. Because the truth is, in every team I’ve worked with, there are always people willing and happy to invest their time. Then you came up with a holistic view, building shared knowledge so others could align around it—and you kept working on it. Month after month, year after year. And when you do that… stuff happens. – Jon
I love your approach and what you’re doing—because at its core, it’s all about individuals and interactions over processes and tools. – Jon
October 2022—that was the start. By July, we had collected and defined all the code review conventions across the company. Maybe they weren't fully aligned, but at least they were clearly stated, and we got commitment from teams to follow them. We also updated the process a bit. We reached about 50% of our goal metrics, because we ran into significant issues with our CI/CD pipeline. But we also improved engineering satisfaction by 25%, and that was really nice to see. – Egor
By mid-2024, we had actually achieved almost all of our goals—not because of code review itself, but because other people worked hard on improving our tools. The team did a really great job on that side. And by the end of last year, I checked again—the metrics were still looking good. – Egor
You should never stop iterating—and that’s really important. Why? Because with the knowledge you have today, you might ask: Can I do it differently? Yes. Will I do it differently? Yes. People change, and different people will engage with the problem over time, bringing new perspectives. That’s why it’s crucial to keep the knowledge from past experiences and continue evolving. – Egor
Tools and processes related to code review should always be improved. – Egor
You’ve built a safe, deliberately developmental environment—one where I feel comfortable going to the boss, to the Head of Engineering, and saying something relatively blunt. And you want that, right? I can tell you relish that. That says a lot about who you are. And sometimes—you’ve got to rope people in. – Jon
It’s a beautiful journey—and not an overnight success. It takes real effort and a willingness to dig into the things that hold us back. – Jon