Code reviews as a tool to build capability
Why are you doing code reviews?
Why do we do code reviews?
It seems like an obvious question, but I’ll bet if you asked everyone on your team, you’d get different answers. Some might say “to catch bugs”, others might say “to maintain code quality” and the more process-oriented engineer might just say “because it’s required by our workflow”.1
Some teams are questioning whether formal reviews are even necessary. They point to practices like pair programming, where two developers write code together in real-time, or trunk-based development, where small, frequent commits reduce the need for heavyweight review processes. "Why review code after it's written," they ask, "when we could collaborate while writing it?"
All reasonable perspectives. Whether you're doing formal pull request reviews, pairing at the keyboard, or committing straight to trunk, the underlying question remains the same. Are these interactions helping your team get better, or just an obstacle on the way to shipping things?
When you optimize purely for bug-catching or deployment velocity, you miss the biggest opportunity these practices provide: building your team's collective capability.
What if the purpose of code reviews isn't to prevent problems, it's to share knowledge? What if every review became an opportunity to level up everyone involved, not just the code?
The Gatekeeping Trap
Traditional code review culture (here’s a diff, please review!) can suffer from what I call "single-point-of-failure thinking." One person (usually the most senior) becomes the arbiter of quality, creating invisible hierarchies and knowledge bottlenecks.
Here's what gatekeeping code reviews typically look like:
The Perfectionist Review: Every variable name questioned, every architectural choice debated, every deviation from personal preference highlighted. The reviewer demonstrates their expertise by finding fault.
The Rubber Stamp Review: Quick approvals with minimal engagement. The reviewer trusts the author but misses teaching opportunities. You’ll see this with “LGTM” and “Ship it” style comments.
The Style Police Review: Obsessive focus on formatting, naming conventions, and tooling configurations while ignoring the actual logic and design decisions. Don’t do this - use linters to automate this!
The Silent Treatment Review: No comments at all; just approval or rejection, leaving the author free to press “merge” but with lingering doubts.
Each approach optimizes for different things (thoroughness, velocity, consistency, efficiency), but none optimize for the thing that matters most in the long term: developing your team's collective capability.
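On the style-police point: that whole category of comment can be delegated to tooling. Here's a minimal sketch of a pre-commit configuration, assuming a Python codebase using ruff - swap in whatever linter and formatter fit your stack:

```yaml
# .pre-commit-config.yaml - run linting and formatting before every commit,
# so human reviewers never need to comment on style again.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9
    hooks:
      - id: ruff          # lint: unused imports, naming, common bugs
      - id: ruff-format   # format: indentation, quotes, line length
```

Once style is enforced mechanically, review comments are free to focus on logic and design, which is where the learning actually happens.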
Compassionate Accountability in Code Reviews
In a previous post, I explored how accountability doesn't have to be fear-based. The same principle applies to code reviews. Instead of "holding people accountable" by catching their mistakes, we can create accountability through shared learning and collective ownership of code quality.
Compassionate accountability in code reviews means:
Shared Responsibility: Quality isn't the reviewer's job or the author's job - it's everyone's job. The review process becomes a collaboration to make the code, the team, and ultimately the value shipped to customers better.
Growth-Oriented Feedback: Comments focus on helping people learn and improve, not just pointing out problems.
Psychological Safety: People feel safe to ask questions, admit uncertainty, and experiment with new approaches.
Systemic Improvement: Reviews identify patterns that need addressing at the team or organizational level, not just individual fixes.
It doesn’t have to be code reviews…
Code review through pull requests isn’t the only way. The principles of knowledge sharing apply whether you’re doing pull requests, pair programming or trunk-based development. The format changes, but the opportunity remains the same.
Pair Programming: This is knowledge sharing in real-time. The same principles apply: explain your reasoning, ask questions about approaches, make your thinking visible. The difference is immediate feedback loops and shared ownership of decisions.
Trunk-Based Development: With smaller, more frequent commits, the knowledge sharing happens in bite-sized pieces. Commit messages become more important as teaching tools. Brief async discussions replace lengthy review threads, but the focus on learning remains.
Traditional PR Reviews: The format most teams know, but with intentional focus on knowledge transfer rather than just approval gates.
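For the trunk-based case, a commit message that teaches might look something like this (a made-up example - the service and numbers are hypothetical):

```text
Cap retry backoff at 30s for the payment client

Why: unbounded exponential backoff meant a flaky upstream could
delay a payment by minutes. Capping at 30s keeps worst-case
latency predictable.

Considered: a circuit breaker instead - rejected for now because
we don't yet have the traffic data to tune its thresholds.
```

The "why" and "considered" lines cost seconds to write, but they turn a one-line diff into a small lesson for whoever reads the history later.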
Knowledge Sharing
Here are some ideas on how to transform your code review culture from gatekeeping to knowledge sharing.
Reframe the purpose
Be clear that the purpose of code review is to “share knowledge and build capability”.
If everyone agrees that learning is the purpose of code reviews, then every review becomes valuable regardless of whether problems are found. A “perfect” piece of code becomes an opportunity to understand and document good patterns. A buggy piece of code becomes a teaching moment for everyone involved.
Change the Language
The words we use shape the culture we create (linguistic relativity!). Strive to move away from gatekeeping language and towards conversation instead of judgement.
"This is wrong" → "Here's another approach to consider"
"You should..." → "I've found success with..."
"Fix this" → "What do you think about...?"
"Approved" → "Thanks for teaching me about..."
This isn’t about being artificially nice, it’s about creating an environment where folks are happy to ask questions. For example, “I noticed you used a factory pattern here - could you walk me through why?” is a much better learning opportunity than “you should just use automatic dependency injection”.
Frame your review
If you’re the author of a PR, then frame it to help reviewers learn. Include:
The context about the problem you’re solving
The alternative approaches you considered and why you chose this one
Areas where you’d value feedback
Questions about the parts you’re uncertain about.
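Pulled together, those four points make a lightweight PR description template (a sketch - adapt the headings to your team's conventions):

```markdown
## Context
What problem does this solve? Link to the ticket or incident.

## Approach
What I chose and why. Alternatives I considered and rejected.

## Feedback wanted
Areas where I'd especially value a second opinion.

## Open questions
Parts I'm uncertain about - tell me what you'd do differently.
```

A template like this lowers the cost of good framing: authors don't have to remember the checklist, and reviewers know where to start.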
And if you’re on the other side as a reviewer, then you should be making your thinking visible (see Supporting your Mid-Level Engineers for why I think this is so important!).
Explain the “why” behind the suggestions, not just the “what”
Share relevant experiences or resources
Ask questions about the author’s reasoning
Highlight things you learned from the code.
Progress over Perfection
It’s easy to swing the other way, creating endless discussion and turning code review into a complete computer science curriculum. You’ve got to find the right balance between capability building and shipping value to customers.
Google’s code review guide has two points which I think are excellent.
reviewers should favour approving a CL once it is in a state where it definitely improves the overall code health of the system being worked on, even if the CL isn’t perfect.
Reviewers should always feel free to leave comments expressing that something could be better, but if it’s not very important, prefix it with something like “Nit: “ to let the author know that it’s just a point of polish that they could choose to ignore.
I can’t tell you where this balance is for you, but if you look at the signals you are seeing (what are your engineers saying?) you can adjust!
Create learning loops
Knowledge sharing should create learning loops that build capability at every level:
Individuals get better at writing and reviewing code through deliberate practice and learning
Teams should develop a shared understanding of the domain, patterns, standards and approaches through repeated exposure and discussion.
Organizations should see systematic problems surfaced that can be addressed through tooling, training or process change.
Is it working?
The ultimate test of changing your code review culture is whether it’s working (obviously), and there are signals you can look for. If it is working, then you should be able to see:
Reduction in knowledge silos (reviews are spreading knowledge!)
Reduction in onboarding time (new team members should find an environment that builds their capabilities from day 1)
Increased team confidence when working in different parts of the code base
And ultimately an increase in quality (as a side-effect of building capability).
You can measure all of these by speaking to your engineers regularly! And with those signals, you can make your own judgement call on whether thoughtful, learning-focused reviews pay back the investment in time.
The Compound Effect
When code reviews become knowledge-sharing engines rather than quality gates, the end result is the same (hopefully), but the reasons are different. Quality improves because you are building the capability within the team, rather than introducing a blocking process that relies on a few senior folks to keep quality levels high.
Junior developers learn faster because they're actively mentored through every contribution. Senior developers stay sharp because they're constantly explaining and justifying their approaches. The team develops shared mental models that reduce miscommunication and increase consistency.
Most importantly, you create a culture where learning is valued, curiosity is rewarded, thinking is visible and everyone feels responsible for the team's collective success.
Beyond the Individual
Code review culture is a mirror for your broader engineering culture. Teams that treat reviews as gatekeeping often struggle with knowledge silos, blame culture, and resistance to change. Teams that embrace reviews as knowledge sharing tend to have stronger learning cultures, better collaboration, and more resilient systems.
How do you get started?
Start by discussing the purpose of code review in your team
Change your language and role model reviews as learning opportunities
Take every opportunity to find and amplify learnings that happen as a result of code reviews.
The shift from gatekeeping to knowledge sharing isn't just about making code reviews more pleasant (though it does that too). It's about recognizing that in complex systems (which software most certainly is), the most valuable asset isn't perfect code - it's a team that can continuously learn and adapt.
1. And the super cynical might say “we’re doing code reviews as a tool to make ISO compliance less painful”.


