Peer Reviews Reset

First, a little roast (might remove later)

This is GitLab's definition, and very likely yours as well, of a code review:
A code review is a peer review of code that helps developers ensure or improve the code quality before they merge and ship it.
In a very narrow sense, that is very true. Unfortunately, given that our software engineering culture has been far too focused on code/software as its primary business, the above definition can also be read as an alarming symptom of that error.

The article goes on to list the goals of the Code Review process:
Share knowledge, Discover bugs earlier, Maintain compliance, Enhance security, Increase collaboration, and Improve code quality.

Apart from the Improve code quality goal, the hypothesis has not stood the test of time. I can rarely point to a code review that has effectively helped me gain substantial knowledge or actually "discover" a bug. Compliance and Security sound like they were shoved into the list for the sake of completeness.

Effective knowledge sharing, maintenance of compliance/security, and a reduced rate of unnecessary bugs (yes, there are bugs we introduce knowingly and calculatedly, and those are okay) all happen within a culture of collaboration, communication, and sometimes even documentation. Thinking of code reviews as a tool that primarily targets all of these goals is overstretched, and wrong. It is wrong because it anchors our thinking, however subtly, to the less relevant parts of what we actually must pay attention to and achieve with those reviews.

That anchoring is why I left out Improve code quality above. Not because we have actually achieved that goal, but because it has certainly become the main focus, perhaps even an obsession, for many.


❤️ You roast those you love

Why do I pick on that article from GitLab? Because:
  • I'm a happy customer of GitLab
  • Their article is very well written and represents how the majority of us think about the topic, so it serves as the "literature review" here.
Needless to say, how we got there is not GitLab's fault. It's the engineering culture we have built.


🥳 Pair Work (a bit out of scope; might move out later)

There's a very important point in that article that I cannot skip over without celebrating. Using Pair Programming as an approach to code reviews is the best approach (when feasible) I have experienced so far. Coupled with the right focus, its greatest strength is that it fosters collaboration within the team. I often invite teams to extend it beyond programming, to do Pair Working (or just Pairing).
Nothing is more effective and collaborative than an engineer and a designer sitting together on a task, or a product manager and an engineer putting their brains together on a problem.

Start doing that, and do it more!

Alright, let's get into the meat of it now.

Bad code reviews

After decades of a Software Engineering culture that pushed most of us to make code/software our primary business, our Peer Reviews turned into a wasteland of energy and attention spent on the most irrelevant details. We called it Code Review, and it's likely that the name by itself helped anchor us in the wrong direction too.

From over a decade and a half of coding and reviewing code, if I had to gauge where the energy and time in those Code Reviews is actually spent, it would be in the ballpark of:
50% opinionated code-level chitchat
Discussions on how to write a piece of code or design a class just slightly differently, but supposedly (in each participant's view) with much higher "code quality" gained or sustained. This is the one most engineers get lost in, wasting valuable time and energy for no significant improvement to what really matters.

20% automate-ables
Keystrokes and brain cell power wasted on pointing out what you can easily automate (and have been able to for decades) in your coding environment or your code review platform: typos, missing basic type definitions, violations of deemed-important code formatting/conventions, etc. A minimal sketch of such a gate follows below.
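To make that concrete, here is a minimal sketch of such a gate, assuming a Python codebase with ruff, mypy, and codespell available. The tool choice is illustrative, not a recommendation; swap in whatever linters fit your stack.

```python
#!/usr/bin/env python3
"""Minimal pre-merge gate: catch the nitpicks before a human ever sees them.

Sketch only; assumes a Python codebase with ruff, mypy, and codespell installed.
"""
import subprocess
import sys

# Each entry covers one class of review comment that should never be written by hand.
CHECKS: list[list[str]] = [
    ["ruff", "format", "--check", "."],  # formatting/convention disputes
    ["ruff", "check", "."],              # lint rules, unused imports, style
    ["mypy", "."],                       # missing or wrong type definitions
    ["codespell", "."],                  # typos in code, comments, and docs
]

def main() -> int:
    failures = 0
    for cmd in CHECKS:
        print(f"$ {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failures += 1
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```

Wire something like this in as a Git pre-commit hook or a CI step, and this whole category of review comments disappears; a reviewer pointing out a typo then becomes a signal to fix the pipeline, not the author.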

20% day dreaming
Out-of-scope, overly broad, and often futuristic estimations/suggestions of what this code (or, often, the overall architecture) should look like under use cases or circumstances that not only do not exist in the present, but that the reviewer often has very little evidence or confidence will exist in the near future.

10% good comments
This still happens, of course. I left it here to point out one more bad habit we sometimes build and maintain: just because you leave one or two good comments in your review does not mean you are now entitled to add 20 more irrelevant comments of the kinds mentioned above. Keep it to the good comments.

As a reviewer, my main responsibility is to protect the quality of the code being shipped
So that we keep our code and architecture in a perfect shape
Perfect code/design is neither your main goal nor the author's. No one's, in fact. There's no such thing, so stop thinking about it altogether.

That error in the focus/motivation behind the practice increased the waste of talent, attention, and time, contributing to the most negatively viewed sides of code reviews: increased time to ship and very lengthy reviews.

Avoid them by putting your attention and energy in the right place. Aim to always give good reviews.

Good code peer reviews

As a reviewer, my main responsibility is to increase the confidence in delivering the value the author plans to deliver
So that we keep delivering good value continuously, with as little unnecessary damage as possible
Good reviews consist of 100% good comments. You leave good comments by paying attention to what matters. In order of priority, what matters is: shipped value, follow-up actions, and learning.
70% Shipped Value (~20 minutes of a 30-minute review block)
This requires that you understand the acceptance criteria, the context, the final goal/outcome, and the main compromises the author of the code might have made in order to deliver that outcome in a decent and timely fashion.

20% Context-based follow-up actions (~7 minutes of a 30-minute review block)
When your thoughts and suggestions do not add to the value being shipped, but are still important to address later according to your team's agreements, or important to discuss, then it is useful to link them to that context along with a suggestion of how to proceed outside the scope of the review itself.

E.g. schedule an ad-hoc pairing session or draft a follow-up ticket.

10% Learning (~3 minutes of a 30-minute review block)
Opportunities you spot that can help you learn something together.
A link to internal documentation of conventions, practices, or guidelines that the author may not be aware of, or that you think may be worth re-discussing/updating. A link to an external article or a general practice to get informed about together. An invitation for an open chat about an interesting idea/insight that the author might have inspired in you through their code. And so on.
Again, these should rarely prevent the reviewer from "approving" the code. They are merely an invitation for further interesting conversations and decisions, outside the scope of the review.

Make it practical

Some ways to improve your review process and anchor your thinking towards the relevant things:
  • Think about what's relevant. Remember you are there to deliver value, not perfect code.
  • Do more Pair Work (pair programming, pair reviewing, etc.)
  • Develop a simple resolution guide for the review step. For example, 🚦 The Traffic Light system

Any other advice you have? Happy to hear it and chat more: sarhan.alaa@hey.com