Bipartisan Policy Center
What are impact assessments?
An impact assessment is a risk management tool that seeks to ensure an organization has sufficiently weighed a system’s benefits and costs before implementation. In the context of AI, an impact assessment helps answer a simple question: alongside this system’s intended use, for whom could it fail?
An organization might use an impact assessment to understand how an AI system will affect users and society more broadly at every stage of its life cycle: data gathering, development, deployment, and continuous monitoring. These assessments can serve as critical guardrails for public and private organizations, with the investment required proportionate to the risk involved. For instance, AI systems that offer movie recommendations would receive less scrutiny than those that make hiring or loan decisions. An impact assessment promotes accountability by requiring an organization to document its decision-making process and “show its work.”
During BPC’s 2020 work on an AI National Strategy for Congress, experts reached little consensus on the baseline language, scope, and design of a potential assessment. As we learn how to identify and measure harm from AI systems, impact assessments can make the inner workings of the algorithms that power these systems more transparent. The goal is for impact assessments to promote accountability through documentation and knowledge production, rather than to instill fear of liability. This process seeks to create a regulatory incentive for organizations to answer basic questions about decision-making: why was a given regression criterion or training data set chosen, and why were alternatives rejected?