Senate Commerce Examines Persuasive Tech

On June 25, the Senate Committee on Commerce, Science and Transportation held a hearing on internet platforms’ use of persuasive technology. 

“The powerful mechanisms behind these platforms meant to enhance engagement also have the ability, or at least the potential, to influence the thoughts and behaviors of literally billions of people,” said Senator John Thune.

Senator Thune also said he is developing legislation that would require internet platforms to give consumers the ability to engage without having their experience shaped by algorithms driven by user-specific data. 

Witnesses testified about the techniques tech companies use to engage users. 

Most senators had questions for Maggie Stanphill, a user experience director and leader of the Digital Wellbeing Initiative at Google, on the tech giant’s use of persuasive technology. 

Stanphill denied that the company uses persuasive tech, saying that its “principles are built on transparency, security and control of our users’ data.” 

Senator Brian Schatz seemed doubtful of that claim. 

Senator Ted Cruz asked Stanphill about potential anti-conservative bias at Google, citing a recent report that an executive there wants to prevent “the next Trump situation.” He even asked whether any top Google executives voted for President Trump. 

Stanphill said that Google builds its products for everyone and that, as someone who does not work directly on AI principles, she could not comment on the report. 

The other witnesses shared their thoughts on persuasive technology. 

Tristan Harris, co-founder and executive director of the Center for Humane Technology, and Rashida Richardson, director of policy research at the AI Now Institute, emphasized the need for more transparency in how tech companies use persuasive technology. 

Harris believes that there is an invisible asymmetry of power, which tech companies have been masking as an equal relationship. Richardson said that most of these technologies are “black boxes” and that the government should exercise more oversight of them — even if it means that companies need to waive trade secrecy claims. 

Dr. Stephen Wolfram, founder of Wolfram Research, argued that humans’ inability to fully understand these AI programs is not, in itself, the problem. 

Rather than trying to control how these AI systems work internally, he recommended placing external constraints on them. One option is using third-party ranking providers that utilize AI to sort through the information from larger content selection firms. Consumers would choose which provider to use and, in doing so, select how their content is presented to them. 

Overall, this issue is extremely complicated and, at times, divisive. But all witnesses agreed that there should be more human supervision over how AI is used.

Author: Bethany Patterson

Photo credit: keithreifsnyder (CC BY 2.0)