Intelligence failure—it’s one of the most common mistakes in war. And recently we learned that it led to the deaths of two Western hostages—one American, one Italian—in January.

Quite simply, the CIA drone operators didn’t know that aid workers Warren Weinstein and Giovanni Lo Porto were present at their target—an al-Qaeda compound near the Afghanistan-Pakistan border. President Obama expressed regret over the incident but chalked it up to the fog of war.

To err is human. Yet a debate brewing in the international community overlooks this fact.

Representatives from around the world have gathered in Geneva to discuss “killer robots”—lethal autonomous weapon systems that, once activated, can select and engage targets without further intervention from a human operator. The diplomats in Geneva are debating whether to create an international convention to ban these kinds of weapons.

Supporters of the ban worry that, unlike humans, lethal autonomous weapons won’t be able to distinguish between civilians and soldiers. They envision rogue robots roaming the streets and killing indiscriminately.

These fears are overblown. Steven Groves of The Heritage Foundation notes that we are not talking about Cyberdyne Systems’ Skynet killer robots. We’re talking about weapons like the U.S. Navy’s Phalanx Close-In Weapon System or Israel’s Harpy. The former is a computer-controlled gun system designed to destroy incoming anti-ship missiles. The latter is an unmanned combat air vehicle that operates as a “fire and forget” weapon, homing in on enemy radar stations and destroying them.

In a Harpy strike, any civilians inside the targeted radar station might die. But the Harpy itself poses no greater risk of civilian casualties than a manned aircraft or a human-controlled drone does. Civilians whose presence at a military target is unknown are at equal risk from any form of attack.

This is why these systems in particular, and others like them, don’t violate the principle of proportionality, contrary to what supporters of a ban on lethal autonomous weapons argue. That principle prohibits combatants from launching attacks against military targets if the “collateral” damage would be “excessive.” But it’s ludicrous to suggest that the Navy’s Phalanx threatens civilians at all. Will they be riding an incoming anti-ship missile in flight? Likewise, as already explained, the Harpy air vehicle is no more likely to kill a civilian than a drone or manned aircraft would.

Moreover, no excessive civilian casualties would occur unless someone placed those radars in or near population centers. In that event, the blame lies with those who put their own populations at risk, not with the kind of weapon used.

To garner support, advocates of a ban on lethal autonomous weapon systems concoct hypothetical worst-case scenarios in which robots mistakenly kill children running from combat zones. In reality, the systems being designed by the U.S. are intended for use far away from civilian populations. They are weapons designed to attack tank formations and warships that operate in the wide-open spaces of deserts, rural areas and the open seas—places devoid of large concentrations of civilians.

Certainly, there are no plans to deploy a Skynet killer robot soldier that can’t tell a child from an RPG-wielding terrorist. So why would we want to base real-world military planning on the imaginary threats of sci-fi thrillers?

No matter what we do, we can never completely eliminate the human factor from warfare. Whether it is a drone, an anti-armor missile or a sentry gun, a human will always be somewhere in the decision chain. For that reason alone, we can expect that mistakes like the one that killed the unseen aid workers in the al-Qaeda compound will continue to happen.

But that is precisely why we must admit that banning particular technologies will not stop civilian casualties. The human factor will always be present, which means mistakes will always be made. A ban on lethal autonomous weapon systems will no more remove error from warfare than attempts to ban war itself have stopped war.

Originally published in The Washington Times