Hacker News

At least nuclear weapons pose an actual existential risk as opposed to AI - and even then we don't just go to war.


Nukes being a risk was suggested as a reason why the US was willing to invade Iraq for trying to get one but not North Korea for succeeding.

It's almost certainly more complex than that, of course, but the UK called its arsenal a "deterrent" before I left, and I've heard the same given as the reason China stopped at a few hundred warheads.


Even if that were true for Iraq, which I doubt, it would have been the odd one out.

Btw., China is increasing its arsenal at the moment.


Nukes don't have a mind of their own. They're operated by people, who fortunately turned out sane enough that they can successfully threaten each other into a stable state (MAD doctrine). Still, adding more groups to the mix increases risk, which is why non-proliferation treaties are a thing, and are taken seriously.

Powerful enough AI creates whole new classes of risks, but it also magnifies all the current ones. E.g. nuclear weapons become more of an existential risk once AI is in the picture, as it could intentionally or accidentally provoke or trick us into using them.


AI could pose those risks, but it also could not - that is the difference to nuclear weapons.


AI does pose all those risks, unless we solve alignment first. Which is the whole problem.


It does not, because that kind of AI doesn't exist at the moment, and the malevolence etc. are a bunch of hypotheses about a hypothetical AI, not facts.


You keep repeating this tired argument in this thread, so just subtract the artificial element from it.

Instead imagine a non-human intelligence. Maybe it's an alien carbon-based organism. Maybe it's silicon-based life. Maybe it's based on electrons and circuits.

In this situation, what are the rules of intelligence outside of the container it executes in?

Also, every military in the world wargames on hypotheticals, because making your damned war plan after the enemy attacks is a great way to end up wearing your enemy's flag.


How would you feel if militaries planned for fighting Egyptian gods? Just because I can imagine something doesn't mean it is real and that it needs planning for. Using effort on imaginary risks isn't free.


That's long been covered already. Ever heard of the Stargate franchise? That's literally an Air Force-approved exercise in fighting Ancient Egyptian gods with modern weapons :).

More seriously though, Egyptian gods are equivalent to aliens in general, adjacent to AI, and close enough to fighting a nation that somehow made a major tech leap, so militaries absolutely do plan for that.


> AI could pose those risks, but it also could not

"I'm going to build a more powerful AI; don't worry, it could end the world, but it also could not."



