The FTC Releases New Guidance Regarding Combating AI Bias

By: Bridget Visconti

On April 19, the Federal Trade Commission (FTC) released new guidance on the commercial use of AI and the steps companies should take to ensure that AI products do not exhibit bias based on gender, race, or other legally protected classes. The FTC lays out recommendations to help companies stay within the bounds of the FTC Act.

The FTC issues a stark warning: companies must hold themselves accountable for combating AI bias, or the FTC will do it for them. In an attempt to clarify the threshold for litigation, however, the Commission stresses that a product must do more harm than good before its use can be challenged under the FTC Act.

The FTC warns AI developers to watch for discriminatory outcomes when testing their products and recommends that companies retest their products and algorithms for race, gender, and other biases over time. It also suggests that companies embrace transparency and seek help from independent sources to evaluate biases in their products that they may not have previously recognized.
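
The guidance describes no specific methodology, but a minimal retest of the kind the FTC recommends might look like the demographic parity check sketched below. The function names, record layout, and the four-fifths threshold are illustrative assumptions, not anything the FTC prescribes; a production audit would use richer fairness metrics and legal review.

```python
from collections import defaultdict

def approval_rates(records, predict):
    """Share of positive model outcomes per protected group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for rec in records:
        counts[rec["group"]][0] += int(predict(rec["features"]))
        counts[rec["group"]][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gaps(rates, threshold=0.8):
    """Flag groups whose positive rate falls below `threshold` times the
    highest group's rate -- the informal 'four-fifths rule' often used
    as a first screen for disparate impact."""
    best = max(rates.values()) or 1.0  # guard against all-zero rates
    return {g: round(r / best, 2) for g, r in rates.items() if r < threshold * best}
```

Rerunning a check like this on fresh data at each release is one lightweight way to catch bias that drifts in over time, which is exactly the kind of periodic retesting the FTC recommends.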

Transparency about how an AI system arrived at a given result raises a deeper problem: full explainability can defeat the entire purpose of using AI. If humans could reach the same result from the data set and explain how they got there, was the AI even necessary? AI is meant to analyze large data sets and surface correlations or causations that humans cannot easily recognize.

The mantra holds true: garbage in, garbage out. Companies should be aware of gaps in their data sets that can degrade the performance of their products. Every system carries the risk of unknown data set flaws, though some flaws should be more obvious than others. Much hay has been made in the facial recognition industry, for example, as external reviews of data sets have found significant shortcomings. As a very simplistic example of what the FTC requires in disclosures: if a company is testing new facial recognition software and all of its test subjects are men, that presents a huge data gap and must be disclosed to consumers interested in using the software. A company that fails to disclose this information will be subject to law enforcement action.
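
To make the notion of a data gap concrete, here is a minimal sketch of a representation check a company might run before release. The reference shares and tolerance below are invented for illustration; the FTC guidance does not mandate any particular numbers or method.

```python
def representation_gaps(group_labels, reference_shares, tolerance=0.05):
    """Compare each group's share of a data set against a reference
    population (e.g. census figures) and report material gaps."""
    n = len(group_labels)
    gaps = {}
    for group, expected in reference_shares.items():
        actual = group_labels.count(group) / n
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# The FTC's hypothetical all-male test pool is flagged immediately:
print(representation_gaps(["male"] * 100, {"male": 0.5, "female": 0.5}))
# {'male': {'expected': 0.5, 'actual': 1.0},
#  'female': {'expected': 0.5, 'actual': 0.0}}
```

A report like this is also the natural artifact to hand to the consumer-facing disclosure the FTC expects.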

Data set flaws underscore the importance of external review, but conflicts of interest can arise when proprietary information, including the data sets themselves, is involved. Under this FTC guidance, however, companies may need to become more open to independent data set reviews to avoid the heavy hand of enforcement action.

The FTC also stresses the importance of transparency with customers about data privacy. It instructs companies to explain to users how their data will be used; for example, whether it will become part of a set used to train an AI system. Over the past few years, there have been many instances in which companies claimed users had a choice about how their data was used, when in reality the companies used the data however they chose. The tension between privacy and an accurate data set is also very real: if too many individuals with similar backgrounds opt in to or out of a data set, the data becomes skewed.
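
A toy simulation shows how group-correlated opt-outs skew a data set even when the underlying population is perfectly balanced. The opt-out rates here are invented purely for illustration.

```python
import random

random.seed(0)
population = [{"group": g} for g in ["A"] * 500 + ["B"] * 500]
opt_out = {"A": 0.10, "B": 0.60}  # assumption: group B opts out far more often

# Keep only the individuals who did not opt out of data collection.
training = [p for p in population if random.random() > opt_out[p["group"]]]
share_b = sum(p["group"] == "B" for p in training) / len(training)
print(f"Group B: 50% of the population, ~{share_b:.0%} of the training set")
```

The resulting training set underrepresents group B, so any model trained on it inherits that gap regardless of how carefully the algorithm itself was designed.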

While AI development has been on the rise for several years, it is still a new field of innovation, and there is much that innovators, policymakers, and the public don’t know about it. The potential for data gaps and algorithm design flaws that lead to biased results should be evaluated and corrected, especially as AI reaches into more areas of our daily lives.

The FTC has a history of combating bias and privacy violations through its authority under the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, and it has made clear that it intends to keep doing so as AI advances.

Photo credit: Ian Hutchinson