Author: Henry Rademacher
On Thursday, October 31, the Brookings Institution Center for Technology and Innovation hosted a panel discussion on the development and future of AI. Specifically, the panel addressed “opportunities, risks, and ways to mitigate possible problems with these emerging technologies.” The discussion was moderated by Dr. Darrell M. West, vice president and director of Governance Studies at the Brookings Institution, with panelists from the Brookings Institution and the Information Technology and Innovation Foundation (ITIF).
Panelists included Dr. Nicol Turner Lee, fellow in the Governance Program's Center for Technology Innovation at the Brookings Institution; Dr. John Villasenor, nonresident senior fellow in Governance Studies and the Center for Technology Innovation at the Brookings Institution; and Dr. Robert D. Atkinson, founder and president of the Information Technology and Innovation Foundation (ITIF).
The event began with a discussion of bias in AI, a problem that has been documented extensively over the past few years. One significant issue is that facial recognition technology has difficulty identifying people of color. As the technology is adopted by law enforcement and criminal justice agencies, this implicit bias could have negative effects on communities and individuals of color. Dr. Turner Lee, a noted expert on bias in AI, proposed that companies involved in AI develop a “ratings system” that would disclose and specify the level of bias in a given AI algorithm. She argued that this would incentivize companies to create better, less discriminatory algorithms.
The discussion then moved to the issue of “deep fakes,” which Dr. Villasenor described as “videos that have been constructed to make a person appear to say or do something they never said or did.” As deep fake technology improves, it could reach a point where well-made deep fakes are indistinguishable from real photos and videos. Such a situation could make public figures, including politicians and celebrities, vulnerable to blackmail and extortion. Like bias in AI, deep fakes are a problem that is difficult for developers to address because the technology is evolving so rapidly. In his research, Dr. Villasenor, who has written extensively on the subject, has suggested a number of measures to address deep fakes, including detection technology, legislative measures, and raising public awareness.
The panel also discussed whether governments or corporations should take the lead in addressing the problems with AI. The panelists generally agreed that a balance would be ideal. Dr. Villasenor stated, “If you turn the dial too far in terms of oppressive oversight, you kill innovation. On the other hand, completely unregulated work in this area could lead to some very significant harm.” Dr. Atkinson stressed that AI is still in its early stages and that any regulation should be light enough not to stifle innovation.
Problems with AI such as bias and deep fakes are pressing because corporations and governments are implementing the technology so rapidly. It is imperative that responsible entities continue to monitor the development of AI so that problems can be identified and addressed.
Photo credit: 6eo tech (Flickr)