Nukes being a risk was suggested as a reason why the US was willing to invade Iraq for trying to get one but not North Korea for succeeding.
It's almost certainly more complex than that, of course, but the UK called its arsenal a "deterrent" before I left, and I've heard the same given as the reason China stopped at a few hundred warheads.
Nukes don't have a mind of their own. They're operated by people, who fortunately turned out sane enough that they can successfully threaten each other into a stable state (MAD doctrine). Still, adding more groups to the mix increases risk, which is why non-proliferation treaties are a thing, and are taken seriously.
Powerful enough AI creates whole new classes of risks, but it also magnifies all the current ones. E.g. nuclear weapons become more of an existential risk once AI is in the picture, as it could intentionally or accidentally provoke or trick us into using them.
You keep repeating this tired argument in this thread, so just subtract the artificial element from it.
Instead imagine a non-human intelligence. Maybe it's alien, carbon-based and organic. Maybe it's silicon-based life. Maybe it's based on electrons and circuits.
In this situation, what are the rules of intelligence outside of the container it executes in?
Also, every military in the world wargames hypotheticals, because making your damned war plan after the enemy attacks is a great way to end up wearing your enemy's flag.
How would you feel if militaries planned for fighting Egyptian gods? Just because I can imagine something doesn't mean it is real and that it needs planning for. Using effort on imaginary risks isn't free.
That's long been covered already. Ever heard of the Stargate franchise? That's literally an Air Force-approved exercise in fighting ancient Egyptian gods with modern weapons :).
More seriously though, Egyptian gods are equivalent to aliens in general, adjacent to AI, and close enough to a nation that somehow made a major tech leap, so militaries absolutely do plan for that.