If you're asking about the moral reaction here, I think it depends on how one views Copilot. Does Copilot create basically original code that just happens to include a few small snippets? Or does it actually generate a large portion of lightly changed code when it's not spitting out verbatim copies? I mean, if you tell Copilot, "make me a Qt-compatible, cross-platform windowing library" and it spits out a slightly modified version of the Qt source code, and someone starts distributing that with a very cheap commercial license, that would be a problem for the Qt Company, which licenses its code commercially or under the GPL (and since Qt is a library, the Qt GPL forces users to also release their own code under the GPL if they release it, so it's a big restriction). So in the worst-case scenario, you get something ethically dubious as well as legally dubious.
Copilot should be inspiring people to figure out how to do better than it, not making hackers get up in arms trying to slap it down.
Why can't we do both? I mean, I am quite interested in AI and its progress, and I also think it's important to note the way that AI "launders" a lot of things (launders bias, launders source code, etc.). AI scanning of job applications has all sorts of unfortunate effects, etc. But my critique of the applications doesn't make me uninterested in the theory; they're two different things.
A naive developer thinks that they are the source code they write (they're not), and that their source code leaking to the world makes them worthless. (Which isn't true, but feeling that invalidated explains a lot of the fear. Which, welcome to the club, programmers. Automation's here for your job too.)
Still, some of the moral outrage here has to do with it coming from GitHub, and thus Microsoft. The software startup Kite has largely flown under the radar so far, but they launched this kind of product back in 2016; GitHub is late to the game. But look at the difference (and similarities) in the responses to their product launch posts here.
A naive developer thinks that they are the source code they write (you're not), and their source code leaking to the world makes them worthless.
Maybe GitHub isn't violating the licenses of the programmers who host their code there. Maybe Copilot doesn't just spit out code that belongs to other people. Those are matters of interpretation and debate.
But if GitHub were doing this with Copilot, virtually every open source programmer would have a reason to be upset. Open source programmers don't give their code out for free; they license it. This is a legal position, not a feeling. "Intellectual property" may be a pox on the world, but asking open source developers to abandon their licenses to ... closed source developers is legitimately a violation.
And before the spitting-out-source-code problem appeared, I recall quite a few positive responses to Copilot. Lots of people still seem excited. And yeah, people are looking at the downside given Microsoft's long abusive history, but hey, MS did do those things.
You've answered your own question. They went under the radar and nobody cared about them. They're not the multibillion-dollar company that sued Mike Rowe and keeps ReactOS developers awake at night.