They also added a lot of technical debt: I'm sure they used the AI to generate tests, and some of those tests may actually be testing bugs as the correct behavior.
I've already fixed a couple of tests like this, where people clearly used AI without thinking it through, and the test was actually asserting the wrong behavior.
Not to mention the rest of the technical debt added... measuring productivity in software development by the number of tasks completed is so wrong.
They must have had the AI write the implementation as well?
If you're still cognizant of what you're writing on the implementation side, it's pretty hard to watch a buggy test go from failing to passing: it requires you to independently introduce the same bug the LLM did, which, while not completely impossible, is unlikely.
Of course, humans are prone to misunderstanding the requirements and introducing what isn't really a bug in the strictest sense, but rather a misfeature.
> it's pretty hard to see a test go from failing to passing
It's pretty easy to add a passing test and call it done without checking that it actually fails in the right circumstances, and then you end up with a ton of buggy tests.
Most developers don't do the start-at-failing-then-make-it-pass ritual, especially junior ones who copy code from somewhere instead of understanding what they wrote.
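A minimal sketch of what this looks like in practice (hypothetical names and bug, purely illustrative): a test written against a buggy implementation's actual output, rather than the requirement, passes immediately and so never goes through the failing stage that would expose it.

```python
# Hypothetical example: an off-by-one bug in the implementation,
# and a test generated from the buggy output which encodes the bug.

def last_n(items, n):
    # Intended behavior: return the last n elements.
    # Bug: the -1 end index silently drops the final element.
    return items[-n:-1]

def test_last_n():
    # The expected value was copied from what the code currently
    # returns, not from the requirement, so the test "passes" and
    # looks done; the correct expectation would be [3, 4].
    assert last_n([1, 2, 3, 4], 2) == [3]

test_last_n()  # passes without ever having been seen to fail
```

Had the test been written first and watched fail (or the expected value derived from the spec), the mismatch with `[3, 4]` would have surfaced right away.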
> They also added lots of technical debt as I'm sure they used the AI to generate tests and some of those tests could be actually testing bugs as the correct behavior.
Let's not forget that developers sometimes do this, too...