In the coming decades, autonomous vehicles are expected to revolutionize the transport of people and goods: improving safety, reducing emissions, increasing passenger comfort and expanding travel options for people with reduced mobility. But the massive adoption of these vehicles also poses a new series of challenges, not only technological but also ethical. Mathematics, and especially the branch of risk analysis, allows us to approach these moral debates in a transparent way, providing a framework to support ethical decision-making in autonomous vehicles.
To be honest, we are still a long way from being able to routinely hail a driverless taxi and chat with its infotainment system. For the medium term, the most promising automated driving systems (ADS) are the so-called level-three and level-four systems in which, although driving is autonomous, human intervention may be required under certain circumstances. But when should intervention be requested? If the vehicle predicts a dangerous situation, the driver is expected to be alert and ready to regain control; however, if the driver is distracted when the car transfers control, the consequences could be catastrophic. What should the vehicle do? And who would be responsible for an eventual catastrophe? This issue, known as the fundamental dilemma of level-three and level-four ADS, has recently been raised within the framework of the European Trustonomy project, which seeks to increase public confidence in these technologies and, in particular, to address the various ethical debates around them.
Trying to answer ethical questions such as the above fundamental dilemma was one of the objectives of the Moral Machine project, which confronted millions of individuals with these types of situations to gather information on collective ethical priorities across different cultures. Among its conclusions, the marked difference in responses between collectivist cultures (such as China or Japan) and individualistic cultures (such as France or the United States) stands out.
There is, therefore, no universal ethical consensus on emergency decisions in autonomous driving, and no one-size-fits-all answer can be given to such dilemmas. However, different ethical systems can be modeled mathematically in order to guide the decision-making of an ADS consistently. As Ralph Keeney, professor emeritus at Duke University, has argued, any ethical system can be modeled with the help of risk analysis, from the deontological to the consequentialist, including both utilitarian and self-protective approaches.
Recently, within the Trustonomy project, a new model for decision-making in ADS has been developed that takes into account multiple objectives: vehicle performance, passenger comfort, trip duration, safety (that of passengers, of other people in the driving scene, of the vehicle itself and of the infrastructure) and even the manufacturer's reputation. Once the objectives have been defined, each manufacturer could weight them differently, giving more importance to those of interest to it; for example, greater emphasis could be placed on passenger safety, at the cost of less pedestrian safety. To do this, weights are used to combine the different objectives into a single function, called a multi-attribute utility function, which would govern the operation of the vehicle.
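The idea can be illustrated with a minimal sketch of an additive multi-attribute utility function. The objectives, weights and maneuver scores below are invented for illustration; they are not those of the Trustonomy model, which is far richer.

```python
# A minimal sketch of an additive multi-attribute utility function for
# an ADS. All objectives, weights and scores here are illustrative
# assumptions, not the actual Trustonomy model.

def multi_attribute_utility(outcomes, weights):
    """Combine per-objective utilities (each in [0, 1]) into one score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * outcomes[k] for k in weights)

# Hypothetical weighting chosen by a manufacturer that prioritizes
# passenger safety over pedestrian safety.
weights = {
    "passenger_safety": 0.35,
    "pedestrian_safety": 0.25,
    "comfort": 0.15,
    "trip_duration": 0.10,
    "vehicle_integrity": 0.10,
    "reputation": 0.05,
}

# Utilities of two candidate maneuvers in an emergency, each in [0, 1].
hard_brake = {
    "passenger_safety": 0.9, "pedestrian_safety": 0.95, "comfort": 0.3,
    "trip_duration": 0.6, "vehicle_integrity": 0.8, "reputation": 0.7,
}
swerve = {
    "passenger_safety": 0.7, "pedestrian_safety": 0.99, "comfort": 0.2,
    "trip_duration": 0.7, "vehicle_integrity": 0.5, "reputation": 0.75,
}

# The vehicle would select the maneuver with the higher combined utility.
best = max([("hard_brake", hard_brake), ("swerve", swerve)],
           key=lambda m: multi_attribute_utility(m[1], weights))
print(best[0])  # hard_brake scores higher under this weighting
```

Changing the weights (say, raising pedestrian safety) can reverse the ranking of maneuvers, which is exactly how different ethical priorities would translate into different driving behavior.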
The proposed model standardizes the decision-making process of ADSs and makes it transparent. This allows, among other things, that decision-making to be reproduced in simulated environments. As a result, competent regulators could simulate many driving scenes and set standards for use. Furthermore, if an ADS suffers an accident, its operation could be simulated using the utility function that guided the vehicle's decisions when the accident occurred. These simulations would make it possible to assess whether the vehicle met current regulations and, if not, to assign responsibility.
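A toy sketch of this regulatory use: replay a vehicle's decision rule over many randomly generated emergency scenes and measure how often its chosen maneuver stays within a risk limit. The decision rule, scene generator and threshold are all hypothetical; a real audit would use the manufacturer's actual utility function and realistic scene models.

```python
# Toy audit: replay a (hypothetical) ADS decision rule in simulation and
# check its choices against an invented regulatory risk limit.
import random

random.seed(0)

PEDESTRIAN_RISK_LIMIT = 0.10  # hypothetical regulatory cap per maneuver

def decide(scene):
    """The decision rule under audit: pick the lowest weighted risk."""
    def combined_risk(action):
        # Weights mimic a manufacturer that favors passenger safety.
        return (0.6 * scene[action]["passenger_risk"]
                + 0.4 * scene[action]["pedestrian_risk"])
    return min(scene, key=combined_risk)

def random_scene():
    """One emergency scene: each maneuver carries two risk levels."""
    return {
        action: {"passenger_risk": random.random() * 0.3,
                 "pedestrian_risk": random.random() * 0.3}
        for action in ("brake", "swerve", "continue")
    }

# Replay many simulated scenes and count regulatory violations.
scenes = [random_scene() for _ in range(10_000)]
violations = sum(
    scene[decide(scene)]["pedestrian_risk"] > PEDESTRIAN_RISK_LIMIT
    for scene in scenes
)
print(f"compliance rate: {1 - violations / len(scenes):.1%}")
```

The same replay, run on the scene reconstructed from an accident, is what would let investigators check whether the vehicle's programmed trade-offs complied with the applicable standard.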
Determining the decision-making of an ADS is a very complex problem that concerns far more than emergency situations. This complexity is well reflected in the recent European Commission report "Ethics of connected and automated vehicles", which identifies recommendations on road safety and risk, the ethical aspects of data and algorithms, and the question of liability. Undoubtedly, this raises a myriad of technological challenges, in which mathematics will play a central role.
David Rios is a research professor at the Spanish National Research Council (CSIC), holder of the AXA-ICMAT Chair at ICMAT and a full member of the Royal Academy of Exact, Physical and Natural Sciences of Spain.
Roi Naveiro is a postdoctoral researcher at ICMAT (CSIC).
Coffee and Theorems is a section dedicated to mathematics and the environment in which it is created, coordinated by the Institute of Mathematical Sciences (ICMAT). In it, researchers and members of the center describe the latest advances in the discipline, share meeting points between mathematics and other social and cultural expressions, and remember those who marked its development and knew how to transform coffee into theorems. The name evokes the definition attributed to the Hungarian mathematician Alfréd Rényi: "A mathematician is a machine that transforms coffee into theorems."
Editing and coordination: Ágata A. Timón G Longoria (ICMAT).
You can follow MATTER on Facebook, Twitter and Instagram, or sign up here to receive our weekly newsletter.