The best way to prevent the future is to invent it.
Paraphrasing Alan Kay here. There's a certain inevitability to AI. The proverbial cat is out of the bag. Our choices are simply to be part of this revolution or to be left behind while the likes of China and others, not blocked by mildly hysterical ethical activists protesting their own irrelevance, plow ahead. It's really that simple.
I call out China here because 1) they have lots of people actually working on AI, 2) a lot of the hardware we use for AI is actually made there, and 3) they have a history of getting stuff done once they decide they want to do something.
Delaying tactics, insisting it is done right, getting upset about things changing, fearing the loss of control, and similar sentiments simply aren't constructive. It's not going to stop this. If you want it done right, make sure you are involved in the doing.
I'm not aware of anyone who wants to "prevent AI" - as incredibly ambiguous as that phrase is.
What I suppose some people worry about is stuff like:
1. That they will have some means of feeding themselves and their families going forward
2. That they won't be at the whim of increasingly totalitarian (and arbitrary) governments with increasing power
3. That they will continue to find human connection
4. That they won't be overwhelmed by lots of crap content
If someone _thinks_ there's a problem, there _is_ a problem: it's pretty tricky to talk people out of their fears. You can tell them to shut up, or you can address their fears and make them feel better about the future. That typically means putting some checks and balances in place, as well as adjusting their expectations.
China started doing mass surveillance early on, so they're probably better at the technologies involved than the average country. By your logic, other countries should stop whining about it and start doing the same (or more), so as not to be left behind. I don't think everyone needs to do the worst thing that's technically possible to do. Sometimes those things aren't even all that beneficial for the success of a country, which I'd ultimately measure in citizen satisfaction and safety from hostile countries.
"the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die"
"Shut down all the large GPU clusters...be willing to destroy a rogue datacenter by airstrike"
I've seen many people on Twitter and Reddit calling for a total ban of image generators, because they take away jobs from artists. Yes - they think that even "ethical" models from Adobe and Getty should be banned.
If nobody addresses their fears in a reasonable fashion, I guess they're gonna find their own solutions. Like smashing machines according to TFA.
But that's a specific subset of generative AI they object to. Maybe there are people who oppose _all AI_ (including symbolic AI like pathfinding algorithms), but I'm not aware of any.
China actually has regulation for safe AI. It's hard to say how "safe" that regulation is. But it's better than what the US is doing with OpenAI, which is essentially "do whatever", lol.
They also use AI to monitor and police their own citizens. AI serves the common good there, and it is a tool wielded by the elite that dictates what that common good is. Nominally in their subjects' name, of course. But it's not a democracy.
If you look inside police departments, courtrooms, etc., you'll find a lot of AI in the US. That includes monitoring. Cities like NYC are under constant surveillance on the street, below the street, and in the air. Even outside the cities, Ring, for example, partners with police departments, deploying doorbell cameras on people's doors whose footage police can access.
And that's just surface level government usage. Go on, in, or near any of the "elite's" assets, and you will be recorded and analyzed from a dozen angles, whether those assets are physical or digital.
The monitoring infrastructure might be largely built and/or deployed by private companies, but those companies know who their paying customers are: governmental entities, non-governmental ones, and entities that blur the line, all of which want to monitor and police citizens.