Rock, Paper, Killer Robot …

Interesting developments in killer robot country:

From Gizmodo on March 5:

U.S. Army Assures Public That Robot Tank System Adheres to AI Murder Policy

Quote of Note:

“Last month, the U.S. Army put out a call to private companies for ideas about how to improve its planned semi-autonomous, AI-driven targeting system for tanks. In its request, the Army asked for help enabling the Advanced Targeting and Lethality Automated System (ATLAS) to “acquire, identify, and engage targets at least 3X faster than the current manual process.” But that language apparently scared some people who are worried about the rise of AI-powered killing machines. And with good reason.”

If you want Daleks, this is where you start.  Just an observation.

The U.S. Army wants to engage targets at least three times faster?  Here’s a guess at how they’ll eventually hit that performance point: take humans out of the kill chain.  Full stop.  No carbon-based, slow-reflexed clottage slowing things down … just lean, mean silicon & metal runnin’ and gunnin’.  Autobots, roll out.

But apparently, that won’t happen because … policy?  “The Department of Defense Directive 3000.09 requires that humans be able to “exercise appropriate levels of human judgment over the use of force,” meaning that the U.S. won’t toss a fully autonomous robot into a battlefield and allow it to decide independently whether to kill someone. This safeguard is sometimes called being “in the loop,” meaning that a human is making the final decision about whether to kill someone.”

Whew.  I feel safer.  You?

Looking down the board … WHY will we ultimately ignore hallowed policy, logical concerns & human morality and make that fateful leap to utilize fully autonomous weaponry? Simple: it’s the same reason that there will be widespread adoption of AI/automation in business – competitive forces. If an enemy army isn’t being held back by an ironclad, impervious-to-all-forces-of-nature-geopolitics-and-markets policy, they just might turn their fully-autonomous weaponry loose on a given battlefield and let ‘er rip. If we have a dog in that particular fight, we will have no choice but to go fully-autonomous, as well … at a minimum to maintain parity [hopefully].

[Wondering out loud, here … after a fully-autonomous weapon (regardless of which “side” it’s fighting on) is used all hot-and-heavy in an engagement: how does it disengage?   What criteria must be met for the algorithm to warrant powering down?  Can it be done manually?  I would think that particular functionality would be very hardened, lest the enemy get their fingers on the power switch.  As such, it would probably be left up to the AI … bringing me back to the first question.]
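Purely as a thought experiment, here is a toy sketch (in Python, with every name, function, and threshold invented for illustration) of the difference between the directive’s “in the loop” gate and the fully autonomous path, plus one possible stand-in for a disengagement criterion (a confidence threshold).  It’s a cartoon of the policy concept, not a claim about how ATLAS or any real system actually works:

```python
# Hypothetical sketch only -- illustrating the "in the loop" idea from
# DoD Directive 3000.09, not any real targeting system. All names invented.

from dataclasses import dataclass


@dataclass
class Target:
    identified: bool   # has the system classified this as a valid target?
    confidence: float  # classifier confidence, 0.0 - 1.0


def human_confirms(target: Target) -> bool:
    """Stand-in for the human 'final decision' step; a real system would
    present sensor data to an operator and wait for explicit consent."""
    answer = input(f"Engage target (confidence {target.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"


def engagement_decision(target: Target, human_in_the_loop: bool) -> bool:
    """Return True only if engagement is authorized under the chosen mode."""
    if not target.identified or target.confidence < 0.9:
        return False                   # "disengage": criteria not met (threshold invented)
    if human_in_the_loop:
        return human_confirms(target)  # the policy-mandated human judgment
    return True                        # fully autonomous: no human gate at all


if __name__ == "__main__":
    t = Target(identified=True, confidence=0.95)
    print("Authorized:", engagement_decision(t, human_in_the_loop=True))
```

Even in this cartoon, the entire moral weight sits on a single boolean; flip human_in_the_loop to False and the “safeguard” simply isn’t called.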

All that said, I feel totally safe & sound knowing that a policy statement is protecting me and the rest of humankind from a heavily armored, more heavily-armed autonomous horde/swarm of killing machines. 

Looks like Japan is all-in on regulating autonomous weaponry (so long as the policies don’t hobble development of AI in other areas):

Japan to back int’l efforts to regulate AI-equipped ‘killer robots’  

Quote of Note:

“Japan will say it cannot overlook the development of such autonomous weapons and does not rule out the possibility of working toward a global ban on them, they said.  Some African, European and Latin American countries have already been active in seeking a prohibition of AI-equipped weapons. But global discussions have not yielded a consensus as the United States, Russia and other countries said to be developing them are reluctant.”

Well, sure … for obvious reasons.  But the potential negative outcomes are equally obvious, as well.  Exhibit A:

“In a message to the Group of Governmental Experts, the UN chief said that ‘machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law’.”  [From UN News, March 25, 2019]

Agreed.  It should be prohibited by international law, for the reasons cited above [at a bare minimum – but I love how he led with “politically unacceptable” – I would have led with “morally repugnant”, but that’s just me].  Bear in mind – there are always actors willing to break “international law”.  If said actor develops autonomous weaponry and deploys it, they have an advantage that adherents to the law do not have. What, then, is an acceptable preparatory countermeasure?

I remember seeing a 2016 World Economic Forum panel discussing “What If: Robots Go to War?”  The bit that stuck with me was that the panel attendees were polled and asked [I’m heavily paraphrasing here] if they would want weaponized AI to defend their country from an invading force, as opposed to having their daughters & sons perform that dangerous task.  The answer? Duh … have the AI do it.  Then the same group was asked – would you prefer to be invaded by a flesh-and-blood military, or by weaponized AI?  Well, things flipped a bit … surprise, surprise, they’d rather be attacked by humans than robots.  Double standard, much?  What does this really say about us as humans?

This entire issue is a foggy maelstrom … I’m very interested to hear your thoughts.

Sleep well, everyone … we have policy protecting us. Rock smashes scissors, paper covers rock … but does it cover us?

Do you want to Remain Relevant in the Age of Automation?  If so, please have a look at the FastFulcrum courses that provide the substrate skills needed to do so:

https://fastfulcrum.com/all-courses/