Like most questions on Ask HN, the answer is: it depends.
Mostly it depends on a) what level of quality you need, b) what are you currently doing for quality control, and c) what other alternatives do you have?
On a): What is your customers' tolerance for bugs? What part of the codebase is this? Most people will be OK with a few application crashes or a feature not working in some edge cases, but they'll be irate if you corrupt their data. Are you just building an MVP or proving an idea? Is the app going to be around in 10 years? Are you building a framework or API that is going to be a building block, or a user-facing app? Huge difference.
On b): If you are already unit testing, that's 30-40% of dev time. Add another 10% for integration testing and basically half your effort is already quality control. Add another 25% of total time on top for code review and suddenly you are looking at spending 60%+ of the dev effort on quality control. Is that a good use of time? Depends on a) and c) :).
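To make the arithmetic above concrete, here is a rough sketch; the percentages are this comment's estimates (with the midpoint taken for "30-40%"), not measured data.

```python
# Rough effort arithmetic from the comment above -- estimates, not data.
unit_testing = 0.35         # "30-40% of dev time": midpoint
integration_testing = 0.10  # "another 10%"
code_review = 0.25          # "another 25% of total time on top"

qc_share = unit_testing + integration_testing + code_review
print(f"Quality control share of total effort: {qc_share:.0%}")  # 70%
```

At 70% of effort going to quality control, the question of whether that's the best mix of techniques becomes hard to ignore.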
On c): For example, perhaps you would be better off getting rid of code review entirely and using the savings to hire dedicated QA people. Or perhaps formal reviews would work better. Or maybe more time on design or on talking to your customers. I'd personally rate all three as more effective.
Code reviews are very bad at finding bugs; that is not their purpose.
Code reviews are great for finding design problems, identifying weaknesses in colleagues (so you can help them increase their skills), keeping everyone knowledgeable about what is going on in other parts of the stack, cross-training, and keeping the code readable and maintainable (if I can't code review your code easily and quickly, it is probably going to be a black box to everyone in 6 months), and so on.
Not that they don't find bugs; they do. It's just that bug-finding is far from their primary value, and far from the best way to find bugs.
My point is that there are plenty of other quality control techniques that may be better suited to your project and team.
E.g.
"Code reviews are great for finding design problems"
Assuming you mean code design problems: so are formal reviews, pair programming, and having multiple developers work on the same parts of the codebase.
"identifying weaknesses in colleagues (so you can help them increase their skills)"
Pair programming is great at this too, as is a technical chat over coffee, as is providing 20% time, as is regular technical training, as is giving your devs time for their own L&D, as is having programmers work on the same parts of the codebase, etc., etc.
"keeping everyone knowledgeable about what is going on in other parts of the stack"
Technical walkthroughs, more consistent abstractions, and/or better doco are also great methods.
Code reviews and unit testing hold such a high-status place in our profession that most other quality control techniques have been thrown out the window.
At my last company the single most effective quality control policy we had was: sleep on it (unless it was an emergency production fix, in which case we didn't care about quality).
As an example: if I am writing a fairly small (<50kloc) user-facing internal web application, where I can work directly with my users, I'm going to stick around a while, and I don't know the domain very well, I'll usually spend 20% of the effort on requirements, 10% on design, 10% on formal requirements and design reviews, 10% on integration/feature tests, 5% on prototyping, 5% on doco, and the remaining 40% on dev. That means no code review and no unit tests. It works very well in this context and allows rapid delivery of business value.
If you take this same configuration and apply it to framework code you are releasing to other internal dev teams, it wouldn't work (and they would hate you).
Thanks for the detailed feedback. I agree with a lot of your points, but I don't think adding QA people addresses the same need as peer code review. QA can find bugs in the current code whereas code review is more about finding issues that may lead to bugs further down the road. Even when there are defects in the current code, they may be edge cases that QA is unlikely to unearth but that will come out in a thorough code review.
None of the techniques we have are exact drop-in replacements for one another. But they are all techniques that improve quality. I've seen a good QA person in the right environment be 100x as valuable as all the unit tests and code review combined.
That's why the answer is really heavily dependent on your team and your project. The key, though, is considering what you aren't doing because you are focusing so heavily on code review.