U.S. Watchdog Announces Guidelines for Artificial Intelligence

By: Noah Vehafric

The U.S. Government Accountability Office released a report detailing a new framework for the accountable use of artificial intelligence by federal agencies. The report sets out four key principles for government administrators to emphasize: governance, data, performance, and monitoring. 

Source: U.S. Government Accountability Office 

A deeper look into the report yields a wealth of information for implementing AI. Governance is the first issue addressed. The report says the federal government has a “talent deficit” in AI and that this is the greatest inhibitor to its adoption by agencies. Key practices for ensuring governance include setting clear goals, roles, and responsibilities for the AI system. Transparency is another important factor for governance: a transparent AI system is one whose users can understand how it arrives at its decisions. While developed for commercial applications, recent FTC guidance on using AI also stressed the importance of transparency. The report recommends giving stakeholders access to information about the AI system’s design, operations, and limitations. 

Data, which is critical to the development, assessment, and use of AI, is the second principle analyzed by the report. The data being used needs to be assessed for accuracy, authenticity, and reliability. Important to accomplishing this, the report stresses, is careful documentation of what data is being used and how. Documentation helps third parties determine whether the data is being used appropriately. Any unintended consequences or disparate impacts of AI will be harder to mitigate or prevent if we lack the documentation to examine how those errors arose. 
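To make the idea concrete, here is a minimal sketch of the kind of dataset documentation such auditing requires. All field names and values are illustrative assumptions, not a schema from the GAO report.

```python
# Illustrative dataset documentation record (not a GAO schema):
# capturing source, collection window, intended use, and known
# limitations so a third party can later judge whether the data
# was appropriate for how it was used.
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetRecord:
    name: str
    source: str                  # provenance, supports authenticity checks
    collected: str               # collection window, supports reliability checks
    intended_use: str            # what the model uses the data for
    known_limitations: list = field(default_factory=list)

record = DatasetRecord(
    name="loan_applications_2020",
    source="Agency intake system, exported 2020-12-31",
    collected="2020-01 to 2020-12",
    intended_use="Training a loan-risk screening model",
    known_limitations=["Underrepresents applicants from rural offices"],
)
print(asdict(record))  # a plain dict, easy to archive alongside the model
```

Keeping such a record next to the model is what lets a reviewer trace a disparate impact back to the data it arose from.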

The report also provides guidance for examining the performance of AI. It urges agencies to test AI systems for biases and inequities. On a related topic, the National Institute of Standards and Technology recently released a report proposing a framework for developing trustworthy AI and combating the impact of bias in its use. The GAO report also suggests that an entity using AI should set a clear level of human involvement at which oversight is expected. 
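One simple form such a bias test can take is comparing favorable-outcome rates across demographic groups, sometimes called the demographic parity gap. The group labels, data, and 0.1 tolerance below are illustrative assumptions, not figures from the GAO or NIST reports.

```python
# Sketch of a basic fairness check: the largest difference in
# favorable-outcome rates between any two groups.
def demographic_parity_gap(outcomes, groups):
    """outcomes: 1 = favorable decision, 0 = unfavorable.
    groups: the demographic group of each decision subject."""
    rates = {}
    for outcome, group in zip(outcomes, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + outcome)
    positive_rates = [p / t for t, p in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Example: model decisions for two (hypothetical) groups.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(outcomes, groups)
if gap > 0.1:  # illustrative tolerance, to be set by the agency
    print(f"Potential disparity detected: gap = {gap:.2f}")
# → Potential disparity detected: gap = 0.50
```

A check like this is only one lens on equity, but it gives auditors a measurable quantity to track.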

Because “AI systems are dynamic and adaptive, performance can vary over time,” management should be proactive in monitoring AI to ensure it remains aligned with agency objectives. This should be accomplished through continuous monitoring of the AI’s performance, comparing observed results against expectations to detect model drift. 
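Continuous monitoring of this kind is often implemented by comparing the distribution of a model's scores in a baseline window against the current window. The sketch below uses the population stability index (PSI) for that comparison; the bin edges, sample data, and 0.2 alert threshold are common conventions assumed for illustration, not prescriptions from the GAO report.

```python
# Sketch of drift monitoring via the population stability index:
# a score distribution that shifts away from its baseline raises
# the PSI, which can trigger a review.
import math

def psi(baseline, current, edges):
    """Population stability index across shared score bins."""
    def proportions(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bin index
        n = len(values)
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / n, 1e-6) for c in counts]
    b, c = proportions(baseline), proportions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

baseline_scores = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
current_scores  = [0.3, 0.5, 0.6, 0.7, 0.8, 0.9]  # scores have shifted up
drift = psi(baseline_scores, current_scores, edges=[0.33, 0.66])
if drift > 0.2:  # common rule of thumb for significant drift
    print(f"Drift alert: PSI = {drift:.2f}")
# → Drift alert: PSI = 0.55
```

Run on a schedule, a check like this turns the report's "continuous monitoring" into a concrete alert that performance may no longer match expectations.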

AI has expanding potential in every industry. As its use becomes more commonplace, having clear principles for designing and using AI will become ever more important. The report is yet another example of the clear and detailed work completed by the GAO. Its framework establishes important principles that federal agencies can use to safely enhance their operations with AI. 

You can read the full GAO report HERE. 

Photo credit: Ron Cogswell