Ethics-based AI auditing represents a nascent yet fragmented research domain that seeks to bridge the gap between abstract ethical ideals and actionable practices. This paper synthesizes the literature on ethics-based AI auditing by identifying key themes, knowledge gaps, and future research directions. Employing a systematic literature review (SLR) methodology guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), the authors examine the conceptualization of ethical principles in AI auditing and the knowledge contributions these audits provide to stakeholders. The review confirms that ethics-based AI auditing is an emerging area of study, characterized by diverse and often inconsistent interpretations of core ethical principles.

The findings reveal two primary themes in the literature: the articulation of ethical principles—such as fairness, transparency, non-maleficence, and autonomy—and the identification of key stakeholders, including developers, auditors, regulators, users, and the wider public. Despite these contributions, the review highlights significant gaps, including the lack of standardized methodologies for implementing ethical principles, the challenges of auditing opaque AI systems, and insufficient alignment of stakeholder perspectives.
Read the full paper here.
Citation: Laine, J., Minkkinen, M., & Mäntymäki, M. (2024). Ethics-based AI auditing: A systematic literature review on conceptualizations of ethical principles and knowledge contributions to stakeholders. Information & Management, 61(5).