Matthias Englert, Sandra Siebert, and Martin Ziegler have posted Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons.
Here is the abstract:
Lethal Autonomous Weapons promise to revolutionize warfare — and raise a multitude of ethical and legal questions. It has thus been suggested to program values and principles of conduct (such as the Geneva Conventions) into the machines’ control, thereby rendering them both physically and morally superior to human combatants.
We employ mathematical logic and theoretical computer science to explore fundamental limitations to the moral behaviour of intelligent machines in a series of “Gedankenexperiments”: Refining and sharpening variants of the Trolley Problem leads us to construct an (admittedly artificial but) fully deterministic situation where a robot is presented with two choices: one clearly morally preferable to the other — yet, based on the undecidability of the Halting problem, it provably cannot decide algorithmically which one. Our considerations have surprising implications for the question of responsibility and liability for an autonomous system’s actions and lead to specific technical recommendations.
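The undecidability result the abstract invokes is the classical one: no algorithm can decide, for every program and input, whether that program eventually halts. The following minimal Python sketch (not from the paper; the function names `halts` and `diagonal` are hypothetical) illustrates the standard diagonalization argument on which the authors' trolley construction rests.

```python
# Illustrative sketch of the Halting problem's undecidability.
# Assume, for contradiction, that a total decider `halts` exists.

def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical decider: True iff the given program,
    run on the given input, eventually halts."""
    raise NotImplementedError  # assumed to exist for the argument

def diagonal(program_source: str) -> None:
    """Loops forever exactly when `halts` claims the program
    halts when run on its own source."""
    if halts(program_source, program_source):
        while True:   # loop forever
            pass
    # otherwise: halt immediately

# Feeding `diagonal` its own source code yields a contradiction:
# if halts(d, d) returns True, diagonal(d) loops forever;
# if it returns False, diagonal(d) halts. So `halts` cannot exist.
```

If the morally preferable choice in a scenario is made to depend on whether some embedded computation halts, a robot that always picks correctly would constitute exactly such a decider, which is why no algorithm can guarantee the morally correct decision in every case.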
Mark Gibbs has a recent post at NetworkWorld describing the paper: Forget your robot overlords: Watch out for Lethal Autonomous Systems that make mistakes.
Filed under: Applications, Articles and papers, Policy debates, Technology developments, Technology tools Tagged: Algorithmic law, Artificial intelligence and law, ArXiv, Drone law, Drones, Halting problem, Intelligent agents' legal decision making, Law of robots, Laws of war information systems, Legal algorithms, Martin Ziegler, Matthias Englert, Modeling legal decision making, Robot law, Sandra Siebert, Trolley Problem
via Legal Informatics Blog http://ift.tt/1uA4EjS