We need to create specific accountability guidelines to ensure that the use of Artificial Intelligence (AI) robots remains ethical, according to new research led by Zsofia Toth, Associate Professor in Marketing and Management at the Business School.


The research sets out a new framework for ensuring that organisations employing AI robots have accountability guidelines in place. AI robots are increasingly used to facilitate human activity in many sectors, for instance healthcare, education, mobility and the military, yet there must be clear accountability for their actions.

Developing the framework

To develop the framework, Zsofia Toth, alongside her colleagues, Professors Robert Caruana, Thorsten Gruber and Claudia Loebbecke, reviewed the uses of AI robots in different professional settings from an ethical perspective.

From the research, four clusters of accountability were developed. These clusters revolve around the ethical categories outlined in normative business ethics:

  • Supererogatory - actions that go a positive extra mile beyond what is morally expected
  • Illegal - actions that are against laws and regulations
  • Immoral - actions that meet only the bare minimum of the legal threshold
  • Permissible - actions that do not require explanations of their putative fairness or appropriateness

The research also suggests two themes for ethical evaluation:

  • The locus of morality – the level of autonomy to choose an ethical course of action.
  • Moral intensity – the potential consequences of the use of AI robots.

Humans can set boundaries on what AI robots can and should learn and unlearn (for instance, to reduce or eliminate racial or gender bias) and on the types of decisions they can make without human involvement (for instance, how a self-driving car responds in an emergency).

The four clusters of accountability

  1. ‘professional norms’ - where AI robots are used for small, remedial, everyday tasks such as heating or cleaning, robot design experts and customers take most of the responsibility for the appropriate use of the AI robots.
  2. ‘business responsibility’ - where AI robots are used for difficult but basic tasks, such as mining or agriculture, a wider group of organisations bears the brunt of responsibility for AI robots.
  3. ‘inter-institutional normativity’ - where AI robots may make decisions with potentially major consequences, such as in healthcare management and crime-fighting, governmental and regulatory bodies should be increasingly involved in agreeing specific guidelines.
  4. ‘supra-territorial regulations’ - where AI robots are used on a global level, such as in the military or in driverless cars, a wide range of governmental bodies, regulators, firms and experts hold accountability. Accountability in these cases is highly dispersed. This does not imply that AI robots ‘usurp’ the role of ethical human decision-making, but it becomes increasingly complex to attribute the outcomes of AI robots’ use to specific individuals or organisations, so these cases deserve special attention.

Research Impact

The researchers hope that their new framework offers insights and an approach for policymakers and governments to assign accountability for the actions of AI robots. Previously, accountability for these actions was a grey area, but a framework like this should help to reduce the number of ethically problematic cases of AI robot use.
