We need to create specific accountability guidelines to ensure that the use of Artificial Intelligence (AI) robots remains ethical, according to new research led by Zsofia Toth, Associate Professor in Marketing and Management at the Business School.
The research sets out a new framework to ensure that organisations employing AI robots have accountability guidelines in place. AI robots are increasingly used to facilitate human activity across many industries, for instance healthcare, education, mobility and the military, yet accountability for their actions must be clearly assigned.
To develop the framework, Zsofia Toth, together with her colleagues Professors Robert Caruana, Thorsten Gruber and Claudia Loebbecke, reviewed the uses of AI robots in different professional settings from an ethical perspective.
The research identified four clusters of accountability, each built around ethical categories drawn from normative business ethics.
The project also suggests two themes for ethical evaluation:
Humans can set boundaries on what AI robots can and should learn and unlearn (for instance, to reduce or eliminate racial and gender bias) and on the types of decisions they can make without human involvement (for instance, how a self-driving car responds in an emergency).
The researchers hope their new framework offers policymakers and governments insights and an approach for assigning accountability for the actions of AI robots. Previously, such accountability was a grey area; a framework like this should help reduce the number of ethically problematic uses of AI robots.