Aren't we missing something at the base of the pyramid:
> does it actually work?
I feel like the foundation of any code review has to be a bug analysis: ensuring that the feature works, doesn't create regressions, and doesn't fail in another supported use case.
Potentially unpopular opinion, but I would much rather get well-formatted, performant, readable code that misses the business case a bit than get a ball of spaghetti mess that nails it. Not only will it be easier to update the code in the former, but the business pressure to just deploy the latter and "fix it later" can sometimes be too high to resist. I was just looking at a "proof of concept" with a 2017 commit date in production last week, the danger of that is all too real.
No, I mean that the primary function of every PR review is to ensure the code does what it's trying to do and doesn't (for example) fail to compile, break the build, loop infinitely, or run out of memory.
On a day-job, closed-team project, if my coworkers are putting up PRs without making sure they meet functional requirements, that's a problem that needs to be solved several steps before the PR. If it's a junior person who doesn't know better, more working side-by-side with them before the PR. If it's a senior person dealing with a shitty spec, more discussion with PMs/designers before we start coding. If it's a senior person putting up broken code because they don't care...that's probably a management conversation.
I have to be able to trust that my teammates are doing their best to produce working components. Otherwise code review is a bandaid covering a hole in the hull of a sinking ship.
One of the first things I check on a PR is whether it passed CI cleanly. If the build has failed (failing tests, build problems, whatever), I get annoyed I was even asked to look at it.
I think the purpose of code review is really information sharing, rather than identifying and removing defects (which might occasionally happen, but is more reliably addressed via testing). On one hand, the team gets to see how a new feature works; on the other, the implementer gets feedback on team norms.