While I appreciate that this is a major advance, I worry these releases will eventually make the internet worthless. If AI can generate fake news, fake text, fake videos, and pretty much anything the user asks for, then we will be flooded with biased, untrustworthy content. There's probably some critical percentage of AI-generated content on the web past which this becomes inevitable. (I'm guessing it's around 40%.)
Their release strategy is to give the public lower-quality models while granting research partners access to the full ones. The point is to let researchers devise methods for detecting and counteracting this new technology before it's widespread. It's essentially "this technology is going to exist anyway, so we need to prepare responsibly."
I've seen a lot of posts along these lines, but I'm unclear on what specific scenarios this technology precipitates. Like what, concretely, is the concern? There's already a lot of bad content online, and anyone who cares about information quality already relies on filtering through human editors. I can buy in principle that adding orders of magnitude more noise might fundamentally destabilize the system... again. But it's really not clear to me how.