PhD by project, Design Interactions department, Royal College of Art, 2012-present
Matthieu Cherubini

Many car manufacturers predict fully autonomous vehicles by 2025. While it is reasonable to suppose our roads will become safer as autonomous vehicles replace traditional cars, the unpredictability of real-life situations involving moral and ethical complexities complicates this assumption (e.g. in the event of an unavoidable crash, should the automated car collide with three adults crossing the road or with one child, if only these two outcomes are possible?). Lethal battlefield robots are another example of such artifacts: given the tremendous complexity of the tasks they have to carry out (e.g. deciding whether or not to kill a human being) and their high degree of autonomy, they too will need an ethical decision-making module.

How can such a system be designed to accommodate the complexity of ethical and moral reasoning? Since ethics has no universal standard, will it become a commoditized feature that one can buy, change, and repurchase according to personal taste? Or will the ethical frameworks embedded in these products be those of the manufacturer? Most importantly, can we ever assume that our current, subjective ethical notions can be taken for granted and used in real-world products that contain a form of moral reasoning?
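To make the idea of swappable ethics concrete, here is a minimal illustrative sketch (not part of any of the projects below, and all names, profiles, and weightings are invented for illustration). It shows how the same crash scenario could be resolved differently depending purely on which ethical profile happens to be installed in the vehicle:

```python
# Hypothetical sketch: a swappable "ethics module" for a crash scenario.
# Every class, profile, and weighting here is invented for illustration only.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Outcome:
    """One possible collision outcome the vehicle could choose."""
    label: str
    adults_harmed: int
    children_harmed: int
    damage_cost: float  # estimated material cost, arbitrary units


# An ethical profile is just a scoring function: lower score = preferred.
EthicalProfile = Callable[[Outcome], float]


def utilitarian(o: Outcome) -> float:
    # Minimise the total number of people harmed, regardless of who they are.
    return float(o.adults_harmed + o.children_harmed)


def protect_children(o: Outcome) -> float:
    # Weigh harm to children far more heavily than harm to adults.
    return float(o.adults_harmed + 10 * o.children_harmed)


def cost_based(o: Outcome) -> float:
    # A deliberately uncomfortable profile that only minimises material cost.
    return o.damage_cost


def decide(outcomes: List[Outcome], profile: EthicalProfile) -> Outcome:
    """Pick the outcome preferred by the currently installed profile."""
    return min(outcomes, key=profile)


if __name__ == "__main__":
    scenario = [
        Outcome("swerve towards three adults", adults_harmed=3,
                children_harmed=0, damage_cost=5.0),
        Outcome("continue towards one child", adults_harmed=0,
                children_harmed=1, damage_cost=2.0),
    ]
    for name, profile in [("utilitarian", utilitarian),
                          ("protect_children", protect_children),
                          ("cost_based", cost_based)]:
        print(f"{name}: {decide(scenario, profile).label}")
```

Identical inputs, different verdicts: the "utilitarian" and "cost_based" profiles continue towards the child, while "protect_children" swerves towards the three adults. Which profile ships by default, and who gets to change it, is exactly the question the projects below explore.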
Ethical Things, 2015 (with Simone Rebaudengo)
Open Source Ethics for Autonomous Surgical Robots, 2014
Ethical Autonomous Vehicles, 2014
Lift China, Shanghai, September 2014
Tasmeem Doha - The House That Knew Too Much, Doha, March 2015 (with Simone Rebaudengo)
Blog (2012-present)
Databots Timeline (not updated since 2012...and not finished)
Algorithms in movies