I don't want anyone to build autonomous weapons, but I don't want anyone to build nuclear weapons or any other weapons of war either; I don't see how to avoid it. If the choice is to either develop and deploy autonomous weapons or to risk having your population conquered and murdered by enemies that use them, then there is no choice.
Possibly, autonomous weapons like chemical weapons won't be important to victory, or like most biological weapons (AFAIK) they won't be cost-effective. But it's hard to imagine a human defeating a bot in a shootout; consider human stock market traders who try to compete with flash trading computers, for example. In fact, I wonder if some of the tech is the same for optimizing decision speed and accuracy.
Perhaps the best response by governments is to use their resources to develop autonomous weapons countermeasures, especially those [EDIT: i.e., those countermeasures] that can be acquired and utilized by those with few resources: Towns, governments in poor countries, and even individuals.
Also, my guess is that it's an area ripe for effective international standards, treaties, and law. All governments can agree that they don't want the chaos of proliferating, unregulated autonomous weapons and would work to enforce the rules.
I've had a shootout of sorts against a robot. The robot was armed with an airsoft gun, and I with a Glock pistol. The goal was not to kill the robot (since it was expensive and the owner and I had spent a long time getting the machine vision software working) but to avoid being hit by the robot while engaging some other targets.
The course had to be carefully constructed to avoid an immediate robot victory, and the robot wasn't mobile. I wouldn't take the human side in a confrontation with an armed robot driven by a defense budget.
The disadvantage of a robot is limited mobility and difficulty distinguishing friends from foes, the same disadvantages which plague landmines. The advantage is that a robotic force could provide the same area denial as landmines without the long-term consequences: set 20% of the robots to come home and recharge every day, with a week-long battery life, and you've got a very short period during which problems can happen.
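The rotation arithmetic above can be checked on the back of an envelope. A minimal sketch, using only the numbers in the comment (the fleet size of 100 is my own hypothetical, chosen just to make the percentages concrete):

```python
# Back-of-the-envelope check of the rotation scheme described above:
# with 20% of the fleet returning to recharge each day and a 7-day
# battery life, each robot spends at most 5 days in the field, leaving
# a 2-day battery margin. A dead or malfunctioning unit is therefore
# missed within days, rather than lingering for decades like a landmine.

FLEET_SIZE = 100         # hypothetical fleet, for concreteness
ROTATE_FRACTION = 0.20   # 20% of robots head home each day
BATTERY_DAYS = 7         # week-long battery life

units_rotating_per_day = int(FLEET_SIZE * ROTATE_FRACTION)
days_deployed_per_unit = FLEET_SIZE // units_rotating_per_day
battery_margin_days = BATTERY_DAYS - days_deployed_per_unit

print(units_rotating_per_day, days_deployed_per_unit, battery_margin_days)
```

So each unit cycles home well inside its battery life, which is the whole point of the comparison with mines.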
Both problems can be solved with today's technology: make the robot airborne to improve mobility, then tag friendly forces with some IFF broadcast. Declare a curfew and boom, everyone who's not a friendly is an enemy combatant.
Biological and chemical weapons are considerably easier to exclude by mutual agreement than AI, because the line is pretty clearly drawn: there is not much of an incremental path from dropping explosives to dropping gas containers. The line suggested here seems much more arbitrary, and would be more like a wide grey area pushed wider and wider into AI territory by including some form of token human interaction as a formality.

As described in the open letter, the "forbidden technology" would sit firmly sandwiched between the well-established technology of "seek to kill" missiles, which are fully autonomous once fired (as opposed to "seek, then autonomously decide to kill or not", which would be forbidden), and teleoperated equipment, which is also explicitly allowed. The latter won't be stopped from acquiring better and better autonomous capabilities by mandating that an operator sign off on kill decisions, which will eventually become a meaningless formality. If we want to avoid autonomous weapons, we need a more robust line than the one suggested.
It is easy to prevent: have the UN ban it and provide incentives for countries to sign a treaty. This has worked for things like chemical and biological warfare. The key is to start the process now, before generals get their hands on the technology, so there won't be any pushback.
I'd argue that it's not that easy to "ban" something by getting the UN to say so, or even by getting conventional treaties signed. First, the nations have to ratify it. And even if they do, they have to abide by it. Here are some examples, googled off the top of my head, where such treaties, laws, and the UN have failed.
Your linked article specifically stated tear gas was still legal for police use under the CWC. An expert rightly points out that this is illogical (and I agree), but it is not a failure.
Also, I should point out that the mace and pepper spray canisters that many people around the world carry for personal protection are also illegal chemical weapons under the CWC if used in combat.
There is a whole lot more illogic and inconsistency to be found between what is legal in something arbitrarily defined as "war" and otherwise, if one delves deep enough into the various treaties and conventions.
There's a question of whether one day you will be able to just "git pull 'terminator' " and install it on a cheap drone with an Arduino to pull the trigger on a mounted AK47.
I reckon the software will get more and more ubiquitous. You can already download image recognition software, maps, and all the other code you need. How far are we actually from being able to send a drone to do what contract killers used to do?
This is a good idea, in theory. I just wonder how controllable it would be, once AI is more of a ubiquitous technology. With nuclear and bio warfare you can ban certain substances. Development of safe nuclear energy has suffered from this, perhaps justifiably. But once there are APIs, Open Source libraries, etc. out there, how will we contain it?
Right, but for it to be successful, there's still a substantial amount of R&D to get the systems cooperating in a manner effective on the battlefield.
I agree, however, that if this were to go forward with a military bankroll, the result would be much easier to replicate. I'm particularly struck by this sentence from the letter: "If any major military power pushes ahead with AI weapon development...autonomous weapons will become the Kalashnikovs of tomorrow." That's terrifying.
No, that's not my argument. My argument is that you shouldn't leave your doors wide open simply because theft is illegal. It still happens, and so will actions with "illegal" weapons.
I'm not suggesting we should ignore it, I'm suggesting that aside from making us feel nice, it's not an effective "solution".
You may as well do the first two if you aren't going to do the latter. Fighting an opponent using AI is like fighting a modern army with spears because you think guns are evil.
This is such a technology leap it's not even funny. I appreciate the signatories' intent here and applaud them for higher level thinking but banning offensive AI first strike or counter strike ability is not going to happen because it puts people that ignore said ban far ahead of you in the ability to deal death department.
The US isn't going to dismantle its nuclear arms, and neither is any other major power. Same story here, only these weapons are even scarier because they are much more flexible.
It's nuclear-arms-level strength without all that gooey radiation mess. AI could do everything from surgical strikes to full-on massed combat without losing a countryman, dominating other nations on the battlefield. Nobody is going to get caught flat-footed on that one.
> "The US isn't going to dismantle its nuclear arms, and neither is any other major power."
It can be done. It happened in South Africa. It was also the subject of considerable debate in the last UK election. We don't have to keep them, as many people recognise they're very expensive for something we have no need for.
No it's more like having guns that are mounted around your village, but not allowing guns that can be mobilized outwardly. For a world trying to be more civilized, that's a noble and defensible position.
> All governments can agree that they don't want the chaos of proliferating, unregulated autonomous weapons
The US is not going to give up this capability. It's still not quite fully signed up to the landmine treaty.
We know how this will go: automated colonial "antiterrorism" enforcement. Like drone strikes today, only lower cost. Entire populations kept in line by the robots that hunt in the night. Objecting to the death robots and organising against it will be considered evidence of terrorism and result in your death, along with anyone who phoned you recently enough. Deployed from Turkey to Tripoli.
Bot+Human? You're describing a drone, and you're right, it works great. Might work better if they could upgrade the optics a few notches, but I'm sure that's already in the works.
Yes, I think Bot-assisted humans would be far more effective than either one alone. Imagine the cunning of a human brain, enhanced with the senses and reflexes of robotics.
But I wonder how much resistance you would get from the military, veterans, military families, and so on who make the argument that for every robot we make a human soldier doesn't have to be put at risk.
I don't agree with that line of thinking but it would be quite a debate to have.
You could make the counter-argument that their enemies will make the same argument to their own populations, which lowers the bar for armed conflict on every side, and increases the odds of war coming to your homefront.
This is exactly why I'm afraid of this future. If the richest nation can send their robots to rape and pillage other countries with no threat to their own population, why wouldn't they?
It took public outrage over lost lives for us (the US) to pull out of a war that we were already losing (Vietnam). I can't imagine what it'd take if we were winning and not dying.
> But I wonder how much resistance you would get from the military, veterans, military families, and so on who make the argument that for every robot we make a human soldier doesn't have to be put at risk.
Or it could go the other way with those people and families worried about losing their livelihoods.
> But I wonder how much resistance you would get from the military, veterans, military families, and so on who make the argument that for every robot we make a human soldier doesn't have to be put at risk.
But can't the same argument apply to biological and chemical weapons? How did it come to be, that there are treaties banning them?
> But I wonder how much resistance you would get from the military, veterans, military families, and so on who make the argument that for every robot we make a human soldier doesn't have to be put at risk.
On the other hand, do soldiers really want to defend themselves against flying, high-speed IEDs with target-recognition software? I mean, I've seen malfunctioning drones move so fast that I lose sight of them. Does anybody really want to see one of these things come over a compound wall carrying a payload of high explosives, and software for identifying groups of human targets and dodging defensive fire?
Once you start an arms race, and once several big powers do the R&D, this would not be an easily controlled technology.
> On the other hand, do soldiers really want to defend themselves against flying, high-speed IEDs with target-recognition software? I mean, I've seen malfunctioning drones move so fast that I lose sight of them. Does anybody really want to see one of these things come over a compound wall carrying a payload of high explosives, and software for identifying groups of human targets and dodging defensive fire?
You basically just described a fire-and-forget missile, which is a technology that has been on the battlefield for over three decades.
A fire-and-forget missile is a single directional device with minor corrections for targeting. It can't hover, back up, select its own target, avoid return fire, etc. So, no we haven't had this tech for three decades.
> A fire-and-forget missile is a single directional device with minor corrections for targeting.
No, it isn't. A fire-and-forget missile is a missile capable of dealing with every issue between the launching platform and the target. This is much more complex than "minor corrections for targeting".
> It can't hover, back up
These are a function of a particular propulsion system, not guidance system. The vast majority of non-rotorcraft cannot hover or back up.
> select its own target
That is exactly what a fire-and-forget weapon does. The firing platform directs the weapon at a particular target to start, but the weapon makes the decision about what to hit. If it loses lock, it tries to reacquire. It does not necessarily reacquire the same target. In fact, you could blindfire most FF weapons and let the seeker pick a target in its path of travel, if you really wanted to. Rules of engagement typically prohibit this, but it is technically feasible.
> avoid return fire
Evasion is certainly something current weapons are theoretically capable of. It is not typically in the package, though, because it adds cost, size, and weight. Once these systems get to the point that they can be added to drones in a cost-effective manner they will likely be added to single-use weapon systems as well.
> So, no we haven't had this tech for three decades.
It has been a constant march of progress, but yes, we have had weapons that can make targeting decisions for themselves for over three decades. The Mk-48 torpedo[1] has been in service since 1972 and has had, since then, the ability to travel a predetermined search pattern looking for targets and automatically attack whatever it finds. The Mk-60 CAPTOR mine has a similar capability to discriminate and engage targets. The RGM-84 Harpoon[3] is launched by providing one or more "legs", then activating the missile's seeker to find and acquire a target; it is not actually fired "at" a particular ship in the conventional sense of the word.
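For readers unfamiliar with what a "predetermined search pattern" looks like: the actual Mk-48 search logic is not public, so the following is purely an illustrative sketch of one classic pattern, an expanding square around a datum point (the function, leg lengths, and waypoint scheme are my own invention, not the torpedo's):

```python
# Illustrative sketch of an expanding-square search pattern: a platform
# spirals outward from a starting datum point, covering an ever-larger
# box until its seeker acquires a target. Not any real weapon's logic.

def expanding_square(start, leg=100, legs=8):
    """Return (x, y) waypoints for an expanding-square search.

    start: (x, y) datum in meters; leg: initial leg length in meters;
    legs: number of straight legs to generate.
    """
    x, y = start
    waypoints = [(x, y)]
    directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # E, N, W, S
    step = leg
    for i in range(legs):
        dx, dy = directions[i % 4]
        x += dx * step
        y += dy * step
        waypoints.append((x, y))
        if i % 2 == 1:      # lengthen the leg after every two turns,
            step += leg     # so the square keeps expanding outward
    return waypoints

print(expanding_square((0, 0), leg=100, legs=4))
```

The point being: "fly a canned pattern, attack whatever the seeker finds" is decades-old technology, not science fiction.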
> Possibly, autonomous weapons like chemical weapons won't be important to victory, or like most biological weapons (AFAIK) they won't be cost-effective. But it's hard to imagine a human defeating a bot in a shootout; consider human stock market traders who try to compete with flash trading computers, for example. In fact, I wonder if some of the tech is the same for optimizing decision speed and accuracy.
The only way for human adversaries to fight autonomous weapons would be with brute, lethal force (nuclear/neutron weapons). It ends poorly for all involved.
Pretty much everything the military uses has a measure of EMP shielding. We've known about its effects for over 50 years now.
Signal jamming is an obvious weak point, but one that disappears as autonomy is increased. Distributed control would reduce the issue (say, have a single soldier/operator manage 10-15 units). Eventually, you remove human control entirely, and along with it, this issue.
Which, to my knowledge, are only currently generated using a nuclear weapon. You might be able to create one using solid state gear with enough time, R&D, and power.
> You could use signal jamming.
Machine intelligence frowns upon your silly attempts at jamming its uplinks. Predator drones and other autonomous, existing military kit already use high frequency satellite communications techniques that are essentially jam proof.
I understand that. My point was, there is no practical method yet to provide the required energy and appropriately direct EM energy at a target, except through a crude weapon like an omnidirectional nuclear blast.
> "Which, to my knowledge, are only currently generated using a nuclear weapon. You might be able to create one using solid state gear with enough time, R&D, and power."
> "Machine intelligence frowns upon your silly attempts at jamming its uplinks. Predator drones and other autonomous, existing military kit already use high frequency satellite communications techniques that are essentially jam proof."
Your idea of jamming is too narrow. Think about it like this: even if it's mostly automated, these machines still get sent signals to inform them of changes to their mission. That signal can be blocked and/or modified. Even satellite links can be altered: either you hack the satellite system or you intercept the signal at a higher altitude than the receiver is operating in.
>Even satellite links can be altered, either you hack the satellite system or you intercept the signal at a higher altitude than the receiver is operating in.
Or, in the case of total war, you blow the freaking satellites out of space with missiles. Yes, I know space weapons systems are technically banned, but how long do you think a nation like the US, Russia, India, or China would put up with satellite-controlled autonomous drones running roughshod over their sovereign territory before they just blow the satellites out of space?
Satellites can actually be destroyed using weapons that aren't in space. Back in 1985, the US had an F-15 launch a missile that took out a satellite in orbit. In 2007, China also destroyed a satellite with a ground-launched missile.
>Machine intelligence frowns upon your silly attempts at jamming its uplinks. Predator drones and other autonomous, existing military kit already use high frequency satellite communications techniques that are essentially jam proof.
American aeronautical engineers dispute this, pointing out that as is the case with the MQ-1 Predator, the MQ-9 Reaper, and the Tomahawk, "GPS is not the primary navigation sensor for the RQ-170... The vehicle gets its flight path orders from an inertial navigation system".[20] Inertial navigation continues to be used on military aircraft despite the advent of GPS because GPS signal jamming and spoofing are relatively simple operations.
Actually, any use of radio frequency at all is a mistake. Propagation CAN be stopped.
You just haven't had access to that information, or you have, and are providing disinformation for someone.
Not necessarily, especially if they're cheap enough (and the beautiful thing about software is that its marginal cost is zero). Think of them like bullets or bombs: expendable, so shooting them down one at a time stops being a meaningful defense.
The biggest weakness of drones is that they cannot make decisions themselves; they need input, communication channels.
The military advantage of putting autonomous AI on drones is so that they no longer need to communicate with home base. The purpose of the AI is to eliminate the weakness of communications being jammed. The requirement to "receive new instructions" is eliminated.
Then how do you coordinate attacks? Even elite military units, deep behind enemy lines, have the ability to receive new intel. You aren't going to build a swarm of robotic generals, each fighting their own war, with no communication between them.
You're not going to launch these things with the order to "go fight the war" and hope to update them on the specifics later.
You're going to launch them with the latest intelligence manually uploaded on board, for missions less than 12 hours in duration. It's like firing a missile: you don't need to recall it once you've hit the red button.
So - AI 1 and 2 - drop 2x 500lb bombs on target at 6759 5974 at 03:12 hours. Go.
They complete the mission and head back. Even better, you give them 4x 500lb bombs and they figure out themselves how much to drop to destroy the target.
Communication worries are overblown, you just have to design around them.
What if you want to call the mission off? Let's say the enemy gets a few key hostages, and holds them in this building. They'll be killed by their own side.
Revokable weapons are weak; irrevocable weapons are strong. It's the same logic as mutually assured destruction, and evolutionarily similar to blind rage.
FWIW I believe autonomous weapons are inevitable because drones cannot be used against technologically sophisticated enemies that can jam them. The hard requirement for continuous communication is exactly what autonomy is eliminating.
The enemy didn't necessarily break the Geneva convention.
Pulling the trigger far in advance of the resultant action increases the risk of disaster, disaster that could've been averted based on the richer dataset available closer to the scheduled time.
They didn't have to. We're just going to say they did anyway because they are evil bastards (TM) and we can't possibly be anything but the good guys.
This scenario is exactly the same as a current ballistic missile launch. There are no safeguards in those systems that could be intercepted and used to interfere with the weapon.
Send more drones to kill your own drones? If the drones can be fed new instructions in the field, then the enemy can feed them fake instructions to shut down.
Wouldn't it be feasible to build an autonomous weapon that doesn't target people, to fight the autonomous weapons that do target people? I would assume at that point whichever AI has better hardware and better algorithms would win, right?
Possibly? Anything past a few years out in tech is hard to predict. I wouldn't outright dismiss the idea, but it's a crapshoot what machine intelligence is and isn't going to be able to do.
I'm an educated, practical tech professional, and machine intelligence worries me more than any other technology out there (except possibly a virus paired with CRISPR-Cas9 for targeted, precise genome modification driven through a species' population).
Don't overlook the covert soldier, blending in with the population, taking a rifle to those building/launching/directing those autonomous weapons and those they care for. One guy infiltrating the homeland with a US$100 rifle and a case of ammo (about the size of a shoebox) can do enormous homeland damage against an enemy obsessed with >$100,000 drones operated by >$10,000,000 staff & facilities.
(That is one of several sufficient reasons why many Americans are obsessed with guns & self-defense. We predict, and see, increasing "spontaneous/lone-wolf" mainland attacks.)
Drone command-and-control facilities would surely be protected from a lone gunman. More to the point, how would guns & self-defense protect against a targeted agent taking down someone important (who presumably already has protection that would need to be circumvented)?
I'm failing to see the common area between targeted spec-ops style missions (and protection against those) and home/civil defense.
Actually, it's ridiculously easy to simply ship dormant AI into the country in boxes, have them activate once in place, and let them sow the havoc you are looking to create.
Homeland C&C facilities are certainly defended from terrorist actions, but less so from 20-30 kamikaze drones launched from within the victim country.
When you fight someone, the idea is to use their strength against them, and the strength of the West is economic trade. All the security measures in the world won't stop FedEx. And if they do, well, in a way you've already won.
Drone defense is indeed a hot topic right now, but it's not fundamentally different from protecting yourself from any other new type of threat. There are measures and there are countermeasures (http://petapixel.com/2015/07/23/anti-drone-systems-are-start...). At the point of (strong, general) AI, though, all bets are off.
Warfare is becoming more and more asymmetric and nuanced, that's for sure. I'd posit that some form of media training, making one less vulnerable to, say, https://en.wikipedia.org/wiki/Information_warfare , would do more good than rifles and bullets at home, though.
Terror targets are basically useless though in a real conflict. A determined foe will simply ignore them.
My point is that for some shipping fees, you have a real, realistic and effective way of substantially reducing your enemy's ability to fight the war you are engaged in.
That's a real vulnerability that can be exploited.
Tactically to survive the immediate onslaught, perhaps, but strategically you don't fight autonomous weapons by attacking the weapons, but by attacking the people controlling them. 1 minute after the nuclear/neutron/EMP bomb has detonated, the next wave of killer robots is released from the hardened bunkers by the remote staff, and you're back where you started; it's the remote staff - and anyone/everything they care about - who must be taken down until surrender.
An "open borders" policy, tolerating & assimilating anyone who brazenly bypasses the checkpoints, is a gaping security void with a giant "STRIKE HERE" sign in flashing neon. [I don't say that to start that argument, but to point to the stark reality of the parent post's premise.]
They still need to be fed objectives/missions or something. Hopefully you are not suggesting to release robotic serial killers with no strategic purpose, are you?
> you are not suggesting to release robotic serial killers with no strategic purpose, are you?
It won't be my idea, but someone may do it. Consider someone without the resources or motivation to code the decision-making component; they can still code "shoot every living thing" and drop the bot into enemy territory (preferably far from their own).
Also, to some degree the AI can generate its own objectives. And IIRC, one objective of autonomy is for the AI to be able to identify and attack unforeseen targets.
The cost of biological weapons is likely to become marginal once the know-how is public, and with things like in-home sequencers we're well on the way towards home labs being feasible and cheap. The limiting factor right now might be ordering the necessary chemicals/cultures, but those too will soon be easy to manufacture at home.
Other hunter AIs, good ole flak cannons, something nano that just assimilates metal to replicate, hacking into their network and genociding the nation that made them... the list is long and nasty.
The problem with rules is that someone always has it in their best interest to break them.
Unlike dropping a nuclear bomb, you could break the rules here for years without even being caught. It's more like Germany in the 1930's than the cold war.
> I don't want to anyone to build autonomous weapons, but I don't want anyone to build nuclear weapons or any other weapons of war either
FWIW, many of the scientists involved in creating the first nuclear weapons began pushing for a ban on further nuclear armament immediately after the first detonation, and since then all wars have been fought with conventional weapons.
I've been reading about the nuclear arms race, and it is terrifying how close we came, and how often, to destroying ourselves. I have possibly never seen greater evidence that there may be a god.
It's worth noting here that even Hitler and Stalin opted against deploying chemical weapons. Well, at least on the battlefield.
Of course, both of them had direct experience of being victims of those weapons. The same cannot be said for nuclear weapons, I'm afraid. People forget how bad things can be, given enough time.