Who's responsible when an AI kills someone?
With the recent news that an autonomous Uber vehicle killed a woman crossing the street in Tempe, Arizona, this ethical question is especially timely. Here is part of an answer, published in the MIT Technology Review:
Criminal liability usually requires an action and a mental intent (in legalese, an actus reus and a mens rea). Kingston says Hallevy explores three scenarios that could apply to AI systems.
The first, known as perpetrator via another, applies when an offense has been committed by a mentally deficient person or an animal, who is therefore deemed to be innocent. But anybody who has instructed the mentally deficient person or animal can be held criminally liable: for example, a dog owner who instructs the animal to attack another individual.
The whole article is worth reading, as it delves deeper into all the possible scenarios.