ETHICALLY AUTONOMOUS ALGORITHMS
PhD by project, Design Interactions Department, Royal College of Art, 2012-present
Many car manufacturers predict fully autonomous vehicles by 2025. While it is reasonable to suppose our roads will become safer as autonomous vehicles replace traditional cars, the unpredictability of real-life situations involving moral and ethical complexities complicates this assumption (i.e. in the event of an unavoidable crash, should the automated car collide with three adults crossing the road or with one child, if only these two outcomes are possible?). Lethal battlefield robots are another example of such artifacts: given the tremendous complexity of the tasks they must carry out (i.e. choosing whether or not to kill a human being) and their high degree of autonomy, they too will need an ethical decision-making module.
How can such a system be designed to accommodate the complexity of ethical and moral reasoning? As ethics has no universal standard, will it become a commodified feature that one can buy, change, and replace according to personal taste? Or will the ethical frameworks embedded in these products be those of the manufacturer? Most importantly, can we ever assume that our current, subjective ethical notions can be taken for granted and built into real-world products that contain a form of moral reasoning?
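The idea of ethics as an interchangeable, purchasable feature can be read as a software architecture question: each ethical framework becomes a pluggable module behind a common interface. A minimal sketch in Python, where every class name, method, and weighting is a hypothetical illustration rather than any real system:

```python
# Hypothetical sketch: ethical frameworks as swappable modules in a vehicle's
# decision system. All names and numeric weights are illustrative assumptions.

from abc import ABC, abstractmethod

class EthicalFramework(ABC):
    """Interface that any purchasable, replaceable ethics module must satisfy."""
    @abstractmethod
    def choose(self, options):
        """Pick one option; each option is a dict describing a crash outcome."""

class UtilitarianFramework(EthicalFramework):
    """Minimise the total number of people harmed."""
    def choose(self, options):
        return min(options, key=lambda o: o["people_harmed"])

class AgeWeightedFramework(EthicalFramework):
    """Weight harm to children far more heavily than harm to adults."""
    def choose(self, options):
        def cost(o):
            return o["adults_harmed"] + 5 * o["children_harmed"]
        return min(options, key=cost)

class Vehicle:
    """The ethics module is a replaceable component, not a fixed property."""
    def __init__(self, ethics):
        self.ethics = ethics

    def resolve_dilemma(self, options):
        return self.ethics.choose(options)

# The dilemma from the text: three adults crossing the road, or one child.
options = [
    {"label": "adults", "people_harmed": 3, "adults_harmed": 3, "children_harmed": 0},
    {"label": "child",  "people_harmed": 1, "adults_harmed": 0, "children_harmed": 1},
]

car = Vehicle(UtilitarianFramework())
print(car.resolve_dilemma(options)["label"])  # "child" — fewer people harmed overall

car.ethics = AgeWeightedFramework()           # swapping in a different ethics module
print(car.resolve_dilemma(options)["label"])  # "adults" — the child is protected
```

The sketch makes the project's question concrete: the same vehicle, facing the same situation, produces opposite decisions depending on which module its owner (or its manufacturer) has installed.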