While there is widespread consensus that artificial intelligence (AI) needs to be governed owing to its rapid diffusion and societal implications, the current scholarly discussion on AI governance is dispersed across numerous disciplines and problem domains. This paper clarifies the situation by discerning two problem areas, metaphorically titled the “easy” and “hard” problems of AI governance, using a dialectic theory synthesis approach. The “easy problem” of AI governance concerns how organizations’ design, development, and use of AI systems align with laws, values, and norms stemming from legislation, ethics guidelines, and the surrounding society. Organizations can provisionally solve the “easy problem” by implementing appropriate organizational mechanisms to govern data, algorithms, and algorithmic systems. The “hard problem” of AI governance concerns AI as a general-purpose technology that transforms organizations and societies. Rather than a matter to be resolved, the “hard problem” is a sensemaking process regarding socio-technical change. Partial solutions to the “hard problem” may open unforeseen issues. While societies should not lose track of the “hard problem” of AI governance, there is significant value in solving the “easy problem” for two reasons. First, the “easy problem” can be provisionally solved by tackling bias, harm, and transparency issues. Second, solving the “easy problem” helps solve the “hard problem,” as responsible organizational AI practices create virtuous rather than vicious cycles.
Citation: Minkkinen, M., & Mäntymäki, M. (2023). Discerning between the "easy" and "hard" problems of AI governance.