How to explain AI systems to end users: a systematic literature review and research agenda

Purpose

Inscrutable machine learning (ML) models are part of an increasing number of information systems. Understanding how these models behave, and what their output is based on, is a challenge even for developers, let alone non-technical end users.

Design/methodology/approach

Through a systematic literature review, the authors investigate how AI systems and their decisions ought to be explained to end users.

Findings

The authors’ synthesis of the literature suggests that AI system communication for end users has five high-level goals: (1) understandability, (2) trustworthiness, (3) transparency, (4) controllability and (5) fairness. The authors identified several design recommendations, such as offering personalized and on-demand explanations and focusing on the explainability of key functionalities instead of aiming to explain the whole system. Multiple trade-offs exist in AI system explanations, and there is no single best solution that fits all cases.

Research limitations/implications

Based on the synthesis, the authors provide a design framework for explaining AI systems to end users. The study contributes to the work on AI governance by suggesting guidelines on how to make AI systems more understandable, fair, trustworthy, controllable and transparent.

Originality/value

This literature review brings together the literature on AI system communication and explainable AI (XAI) for end users. Building on previous academic literature on the topic, it provides synthesized insights, design recommendations and a future research agenda.


Citation: Laato, S., Tiainen, M., Islam, N.A.K.M. and Mäntymäki, M. (2022) How to explain AI systems to end users: a systematic literature review and research agenda. Internet Research 32: 1-31. doi: 10.1108/INTR-08-2021-0600