Company-X director David Hallett explores robots’ rights in the workplace.
Forget human resources, electronic person resources could soon become a thing as we enter a new industrial revolution led by robots.
The dictionary defines robot as “a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer”.
Robots, bolted to the ground, have been working on production lines for decades. More recently robots, equipped with sensors and cleaning devices, have been making businesses and homes better places as they autonomously go about their jobs.
But it won’t stop there, and the European Union (EU) knows it.
The EU is considering establishing a robotics law agency to grant special legal status to any artificial intelligence defined as an “electronic person” after adopting a report from the Progressive Alliance of Socialists and Democrats member Mady Delvaux.
Delvaux says, in the report, that artificial intelligences are about to “unleash a new industrial revolution, which is likely to leave no stratum of society untouched”.
The report, with recommendations to the Commission on Civil Law Rules on Robotics, recently went before the Committee on Legal Affairs and sparked international debate on how robots, which have the potential to outwit humanity in just a few generations, will work alongside their flesh-and-blood counterparts.
The world’s media had a field day over the issue, but the debate is far from new.
Science fiction author Isaac Asimov, who began writing about robots at the outbreak of the Second World War, recognised the problem and addressed it when he postulated The Three Laws of Robotics in his fictional Handbook of Robotics, purportedly published in 2058AD. If the EU gets its way, humanity won’t have to wait another 41 years to formulate laws such as Asimov’s for robots.
“A robot may not injure a human being or, through inaction, allow a human being to come to harm,” Asimov’s first law states.
“A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
“A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
Do Androids Dream of Electric Sheep?, asked Philip K Dick in his 1968 novel; film director Ridley Scott and actor Harrison Ford offered their answer in Blade Runner, their 1982 film adaptation of Dick’s story.
But today’s debate, instigated by the EU report, is closer to that posed by writer Melinda M. Snodgrass in the Star Trek: The Next Generation episode The Measure of a Man, in 1989. It’s the one where the USS Enterprise’s android officer Data is ruled to be alive by a Starfleet court, giving him the right to refuse to be dismantled by a curious robotics expert who wants to build more.
As the EU gains momentum in this area, it will become clearer who is responsible for an injury caused by a robot: the machine itself, or its programmer. Not an issue? Just think about a fatal road traffic accident caused by a self-driving car.
When we move past the hysteria we realise this isn’t fantasy, it’s a reality that will impact us all sooner or later.