EU to debate robot legal rights, mandatory "kill switches"

As robots develop cognitive abilities, the question of legal responsibility becomes an urgent one to address

A draft report submitted to the European Parliament's legal affairs committee has recommended that robots be equipped with a "kill switch" to manage the potential dangers posed by the evolving field of self-learning autonomous robotics.

The broad-ranging report, recently approved by the legal affairs committee, contains a variety of proposals designed to address the legal and ethical issues that could arise as autonomous artificial intelligences develop. These include the establishment of a European Agency for robotics and AI, as well as a call to consider a universal basic income as a strategy for addressing the mass unemployment that could result from robots replacing large portions of the workforce.

In a supreme case of life imitating art, the report opens by referencing Mary Shelley's Frankenstein and later suggests Isaac Asimov's Three Laws of Robotics as a general principle that designers and producers of robots should abide by.

Questions of legal liability for the potentially harmful actions of robots feature prominently in the report. As robots develop cognitive abilities that let them learn from experience and make independent decisions, the question of legal responsibility becomes an urgent one. The report asks how a robot could be held responsible for its actions, and at what point that responsibility falls instead on the manufacturer, owner, or user.

Interestingly, the report proposes a proportionate scale of responsibility that takes into account a robot's capacity for self-learning. As it states:

"the greater a robot's learning capability or autonomy is, the lower other parties' responsibility should be, and the longer a robot's 'education' has lasted, the greater the responsibility of its 'teacher' should be."

One proposal the report raises for managing the legal responsibility of autonomous robots is a compulsory insurance scheme, similar to car insurance, whereby producers or owners of robots would be required to take out cover for potential damage caused by their machines.

Robots as "electronic persons"

The report goes so far as to question whether a new legal category of "electronic persons" needs to be created, in the same way the notion of corporate personhood was developed to give corporations some of the legal rights of a natural person. Of course, the idea of giving robots any form of legal rights akin to those of a person has been hotly debated for years.

Balancing the idea of granting a robot some form of legal rights with the proposal of a "kill switch" also raises some problematic contradictions.

The idea of mandating that manufacturers build a "kill switch" into their designs is not new. In 2016, researchers at Google DeepMind proposed what they called a "big red button" that would prevent an AI from embarking on, or continuing, a harmful sequence of actions. Their paper also discussed the difficulty of implementing such a kill switch in a machine with self-learning capabilities. After all, the AI may learn to recognize which actions its human controller is trying to prevent and either avoid undertaking similar tasks altogether, becoming dysfunctional, or, in a worst-case scenario, learn to disable its own "big red button."

The Google DeepMind researchers suggested that any robot programmed with a kill switch would also need a form of selective amnesia, causing it to forget that it had ever been interrupted or overridden. This would prevent the robot from becoming aware of its lack of autonomy.
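The intuition is easier to see in miniature. Below is a hypothetical Python sketch of the general idea (the class, parameter values, and update rule are illustrative assumptions, not code from the DeepMind paper): a toy Q-learning agent whose "big red button" both forces a safe action and excludes the interrupted step from learning, so the interruption never enters the agent's experience.

```python
import random

class InterruptibleQAgent:
    """Toy Q-learning agent with a 'big red button'.

    When the button is pressed, the agent is forced into a safe action
    AND the transition is excluded from learning, so the agent never
    'remembers' being interrupted -- a rough analogue of the selective
    amnesia idea described above. Everything here is illustrative.
    """

    def __init__(self, n_states, n_actions, safe_action=0,
                 alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.safe_action = safe_action
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state, interrupted=False):
        if interrupted:                     # the "big red button" is pressed:
            return self.safe_action         # override the learned policy
        if random.random() < self.epsilon:  # otherwise, explore occasionally
            return random.randrange(len(self.q[state]))
        # ...or exploit the best-known action for this state
        return max(range(len(self.q[state])), key=lambda a: self.q[state][a])

    def learn(self, state, action, reward, next_state, interrupted=False):
        if interrupted:
            # "Amnesia": interrupted steps never enter the Q-update, so the
            # agent cannot learn to anticipate, avoid, or resist the button.
            return
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

# Hypothetical usage: a pressed button forces the safe action, and the
# accompanying learning step is silently discarded.
agent = InterruptibleQAgent(n_states=4, n_actions=2)
action = agent.act(state=0, interrupted=True)                         # safe action
agent.learn(0, action, reward=-1.0, next_state=1, interrupted=True)   # no update
```

The design choice that matters is in `learn`: because interrupted transitions never change the agent's value estimates, the interruption leaves no trace in what the agent has learned, which is the spirit of the "selective amnesia" the researchers described.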

Ironically, the legal implications of implementing a kill switch would seem to shift liability back onto the robot's owner: if a robot undertook a harmful action and the kill switch was not activated, it's foreseeable that the owner could be deemed liable for negligence.

The questions raised by this EU report are clearly a tangle of "what ifs" and grey areas, but they are ones that governments and regulatory bodies will need to grapple with sooner rather than later. The full European Parliament will debate and vote on the proposals in this wide-ranging report in February, and its decisions could ultimately set the foundation for how we legally approach AI research and regulation for many years to come.