
If "killer robots" are banned in war, how would anyone know?

By Nigel Morris-Co...
Publication: chiefofficersnet

In his Newbury and Hobbes series of novels, author George Mann writes fantastical stories about where the Victorians' obsession with developing new technologies might go. They depict a bleak and terrifying future in which automatons are available to pretty much anyone with money to spare and a will to kill. There are no benevolent butlers, no automated beauties of the kind Hollywood portrays - only clunky machines with the single purpose of destruction, some with a worrying tendency to act alone once given instructions. Set 100 years ago, the stories are a parable for what some now want to ban. But the tech is only part of the problem. What about the people?

A sideways lurch into the second most technical arena after aerospace: Formula One motor racing. F1 is often second only because it adopts aerospace technology as soon as it's developed. F1 has a mindbogglingly complex set of technical regulations. You might think that it's just a car that goes around a track for a couple of hours every other Sunday in (northern hemisphere) summers. Oh, no: every part is carefully described in the technical regs. Every function is described.

Under the old RAC rules in the UK, there was an overriding principle: if the rules don't say you can do it, you can't do it.

F1 sets the standards and then leaves interpretation to the teams, until someone argues that someone else is gaining an unfair advantage. And then the regulators decide whether to ban the innovation.

The problem arises when teams conceal what they have done. Ferrari and Benetton have often been accused of fitting self-deleting launch-control software to their cars to help them get off the line fractionally quicker than their rivals. Nothing was ever proved, but Benetton was found to be using an illegal fuel flow valve that increased the speed of fuel delivery during pit-stops by more than 10%, allowing for a significant advantage. It was also found that Benetton had illegal software installed, but there was no evidence that it had been used and therefore no penalty was applied.

Some might argue that the rarefied atmosphere of F1 doesn't relate to anything else, but it happens in the real world, too: VW Group's widespread use of software to falsify emissions data is an example. The code was buried to prevent it being easily found, even if the falsified emissions data was discovered.

The question over the use of autonomous killing machines is not new: it has been engaging ethics specialists and others for decades, but now there is ground-rush. With an admirable (sarcasm) talent for headline-grabbing, protest groups from 50 nations have formed "The Campaign to Stop Killer Robots." At least it doesn't have a catchy acronym.

The first issue is to define a "robot." A spokeswoman for Human Rights Watch, Mary Wareham, is reported by the BBC as having said "We are not talking about walking, talking terminator robots that are about to take over the world; what we are concerned about is much more imminent: conventional weapons systems with autonomy. They are beginning to creep in. Drones are the obvious example, but there are also military aircraft that take off, fly and land on their own; robotic sentries that can identify movement. These are precursors to autonomous weapons."

That definition falls at the first hurdle: these are devices, not robots in the sense in which the public understands the term. Also, "drones" is far too broad a term to be used so casually. In warfare terms, a "drone" was long a completely dumb thing - often a single-use glider or a target towed behind an aircraft. The current use of the term is exactly the opposite: a vehicle (of some kind) that is operated remotely. In this sense, a Scalextric car or a train set is a drone. At a stretch, so is a Hot Wheels car on a track - there is a degree of external control and, even, programming in a non-computing sense.

Are we talking about banning "set and forget" weapons? If so, we must define exactly what technology can be used for that setting. If we don't, or don't do it properly, we must ban bows and arrows and slingshots and every non-direct-contact weapon in history. Are we to ban on-board adjustment systems where sensors can correct the course? No more Sidewinder and similar missiles, then. No more vibration- or sound-seeking torpedoes, either. Land and sea mines would be out (not that that's a bad thing at all). It means an end to intercontinental ballistic missiles (yes, please). The term "self-destruct" in the context of such things is incorrect: they are destroyed by an on-board mechanism that is activated remotely, even though all other functions, including navigation, are controlled from within the device.

Some are concerned about liability if a machine makes a mistake and kills the wrong target. That's already a reality: who is going to be held liable when self-driving cars make bad decisions resulting in injury or death to pedestrians? We are awaiting the first decision: the first death has already happened.

Also speaking to the BBC, Ryan Gariepy, chief technology officer at Clearpath Robotics, made a point often made by those who preach common sense in relation to so-called "artificial intelligence". He said: "An autonomous system cannot make a decision to kill or not to kill in a vacuum. The de-facto decision has been made thousands of miles away by developers, programmers and scientists who have no conception of the situation the weapon is deployed in." This is the fundamental flaw in all "AI" systems. Someone, somewhere, sometime, has created an instruction set and the machine operates within that instruction set. It can only go within the parameters that are set and apply information (however that is gathered) according to the rules it has been given. Talk of machines "going rogue", like Facebook's software that created its own language, is the result of programmers failing to issue closed instruction sets: in short, those who designed the system designed in, or failed to design out, the failures.
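To make that point concrete, here is a minimal sketch of what a "closed instruction set" looks like. It assumes nothing about any real system; every name and threshold below is invented for illustration. The machine never decides anything its developers did not write into the rules.

```python
# A minimal sketch, in Python, of the "closed instruction set" point above.
# Nothing here describes a real weapons system; the classes and thresholds
# are invented purely for illustration.

def engage_decision(target_class: str, confidence: float) -> str:
    """Return an action determined entirely by a pre-written rule set."""
    # Parameters fixed by developers long before deployment,
    # "thousands of miles away" from where the machine operates.
    PERMITTED_CLASSES = {"armoured_vehicle"}  # a closed list
    MIN_CONFIDENCE = 0.95

    if target_class in PERMITTED_CLASSES and confidence >= MIN_CONFIDENCE:
        return "engage"
    # The crucial line: every unforeseen input falls through to "hold".
    # A system "goes rogue" only when this default is left open or wrong -
    # a failure designed in (or not designed out) by people.
    return "hold"
```

The machine's whole universe of behaviour is that rule set: there is no residual "decision" left for it to make, only inputs its designers anticipated and inputs they did not.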

The definition of autonomous killing machines must be simple: for example, "a device which, once launched, acquires information from external sources and, by means of data and software within it, carries out the functions of navigation and prosecuting an attack."

This all leaves aside the already-in-use technology of remotely operated killing machines, which gather information that is transmitted back to base, where humans analyse it and issue instructions based on their own decision-making. This is, in theory, far less liable to error than a fully autonomous machine would be. The complaint here is of a lack of moral hazard: a person deploying such a device is not placed at any personal physical risk (emotional risk is another question entirely) but is capable of inflicting physical harm on others. This is, to a degree, a false argument: the trebuchet put its operators in much the same position, although they did not see, up close and personal, the damage their weapons caused, unlike UAV operators who, via cameras, immediately see the results of their actions.

Any attempt to limit the development of weapons is a good thing, but such development is intrinsically bound up with development for beneficial purposes. As I wrote elsewhere some years ago, every development can be used for good or bad purposes. It's impossible to contain technology for good only. Human nature is to try to gain a competitive advantage however it can.

Meanwhile, somewhere in the world, at this very moment, electronic "noses" are sniffing airline passengers and their bags, remote-control bomb detectors are tipping over dustbins or approaching abandoned briefcases, UAVs are monitoring busy traffic lanes because criminals steal the SIM cards out of static cameras, and I'm sitting in my study on a beautiful Kuala Lumpur afternoon typing this article into a computer which is issuing instructions to a server in London to say "when you receive a 'publish' command, put this on the front page - and append the server time, not my KL time."
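In spirit, that remote instruction is nothing more exotic than a few lines of code. A hypothetical sketch - the real publishing system's names are not described here, so these are invented; only the behaviour (publish on command, stamp with the server's clock rather than the author's) follows the text:

```python
# A hypothetical sketch of the instruction described above.
from datetime import datetime, timezone

def on_publish_command(article: dict) -> dict:
    """Runs on the London server when a 'publish' command arrives."""
    article["placement"] = "front_page"
    # Append the server's own time (UTC here), not the author's KL time.
    article["published_at"] = datetime.now(timezone.utc).isoformat()
    return article
```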

If you can see this, it worked, proving the point that remote control is so normal that we don't even think that's what we are doing.
