Researchers at MIT Release a Repository of AI Risks

When considering the use of an AI system or crafting regulations to govern its application, determining the specific risks involved is no simple task. For AI systems controlling critical infrastructure, the risks to human safety are clear. However, AI systems designed for tasks like scoring exams, sorting resumes, or verifying travel documents at immigration control each present distinct and equally serious risks.

In developing laws to regulate AI, such as the EU AI Act or California’s SB 1047, policymakers have struggled to reach a consensus on which risks should be addressed. To provide guidance for lawmakers, industry stakeholders, and academics, researchers at MIT have created what they call an AI “risk repository”: a publicly accessible database that categorizes and analyzes AI risks and is freely available for others to use and build on.

Peter Slattery, a researcher at MIT’s FutureTech group and the lead on the AI risk repository project, explains that the repository is designed to be a comprehensive, up-to-date resource for understanding AI risks. The database includes over 700 AI risks, organized by causal factors (such as intentionality), domains (such as discrimination), and subdomains (such as disinformation and cyberattacks). The need for the repository arose from the realization that existing risk frameworks each cover only a portion of the identified risks, leaving potential gaps in AI development, usage, and policymaking.
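To make that structure concrete, the minimal Python sketch below shows one way an entry in a taxonomy like this could be represented and grouped by subdomain. The field names and example values are illustrative assumptions for this article, not the repository’s actual schema.

```python
from dataclasses import dataclass

# Illustrative sketch only: the fields and values below are assumptions made
# for this example, not the actual schema of the MIT AI risk repository.

@dataclass
class RiskEntry:
    description: str    # short summary of the risk
    intentionality: str  # e.g. "intentional" or "unintentional"
    domain: str          # broad category, e.g. "Misinformation"
    subdomain: str       # narrower category, e.g. "Disinformation"
    source: str          # framework or paper the risk was extracted from

risks = [
    RiskEntry(
        description="AI-generated text used for large-scale influence campaigns",
        intentionality="intentional",
        domain="Misinformation",
        subdomain="Disinformation",
        source="Hypothetical Framework (2023)",
    ),
]

# Grouping entries by subdomain makes it easy to see how many identified
# risks fall into each area across the collected frameworks.
by_subdomain: dict[str, list[RiskEntry]] = {}
for risk in risks:
    by_subdomain.setdefault(risk.subdomain, []).append(risk)

print({sub: len(entries) for sub, entries in by_subdomain.items()})
```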

Slattery notes that despite the assumption that there is a consensus on AI risks, the team’s findings suggest otherwise. On average, existing frameworks cover only 34% of the 23 risk subdomains the researchers identified, and nearly a quarter of frameworks cover fewer than 20%. No single document or framework covered all 23 subdomains, and the most comprehensive one covered only 70%. This fragmentation in the literature highlights the need for a more unified approach to understanding AI risks.
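The coverage figures work out simply in principle: for each framework, count the subdomains it addresses and divide by 23. The short sketch below illustrates that calculation with made-up frameworks and subdomain sets; none of these values come from the MIT study.

```python
# Hypothetical data for illustration only; the repository identifies 23
# subdomains in total, but these frameworks and sets are invented.
TOTAL_SUBDOMAINS = 23

frameworks = {
    "Framework A": {"disinformation", "cyberattacks", "discrimination", "privacy"},
    "Framework B": {"privacy", "security"},
    "Framework C": {"discrimination", "misrepresentation", "disinformation",
                    "privacy", "security", "cyberattacks", "spam"},
}

# Coverage = number of subdomains a framework addresses / total subdomains.
coverage = {name: len(subs) / TOTAL_SUBDOMAINS for name, subs in frameworks.items()}

for name, share in coverage.items():
    print(f"{name}: {share:.0%} of subdomains covered")

print(f"Average coverage: {sum(coverage.values()) / len(coverage):.0%}")
```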

To create the repository, MIT researchers collaborated with colleagues from the University of Queensland, the nonprofit Future of Life Institute, KU Leuven, and the AI startup Harmony Intelligence. They reviewed thousands of academic documents related to AI risk evaluations and found that certain risks are mentioned more frequently in existing frameworks. For instance, over 70% of the frameworks addressed the privacy and security implications of AI, while only 44% discussed misinformation. Similarly, more than 50% covered the risks of discrimination and misrepresentation, but only 12% mentioned the “pollution of the information ecosystem,” such as the rise of AI-generated spam.

Slattery suggests that the repository could serve as a foundation for researchers, policymakers, and others working with AI risks. Before this resource, individuals had to choose between investing significant time in reviewing scattered literature or relying on limited frameworks that might overlook important risks. The repository aims to save time and improve oversight by providing a more comprehensive database.

However, questions remain about the repository’s impact. Given the current patchwork of AI regulations worldwide, it’s uncertain whether the existence of such a repository could have influenced past regulatory efforts. Additionally, simply agreeing on the risks posed by AI might not be enough to ensure effective regulation, as many safety evaluations for AI systems have inherent limitations.

Despite these challenges, the MIT researchers plan to use the repository in their next phase of research to evaluate how well different AI risks are being addressed. Neil Thompson, head of the FutureTech lab, explains that they will use the repository to identify gaps in organizational responses to AI risks, ensuring that no significant risks are overlooked.
