> For our team, every commit has an engineer's name attached to it, and that engineer ultimately needs to review and stand behind the code.
Then they claim (and demonstrate with a picture of a commits/day chart) a team-wide 10x throughput increase. I claim there's got to be a lot of rubber-stamp reviewing going on here. It may help to challenge the "author" to explain things like "why does this lifetime have the scope it does?" or "why did you factor it this way instead of some other way?" — i.e. questions that force them to defend the "decisions" they made. I suspect that if you're actually doing thorough reviews, velocity will decrease rather than increase when using LLMs.