The House Subcommittee on Communications and Technology, along with the Subcommittee on Digital Commerce and Consumer Protection, held a hearing on algorithms and their impact on consumers. As companies increasingly use citizens’ data to improve their services, lawmakers want a better idea of what information is collected and how it is used. That unfamiliarity, however, isn’t stopping some from looking to scratch the regulatory itch.
Throughout the hearing, there was bipartisan concern over data protection, the blocking of content online, and the First Amendment issues those practices raise. Though the hearing was nominally about the impact of algorithms, members of both parties raised these issues in relation to the FCC’s upcoming vote on the Restoring Internet Freedom Order.
Republican members were quick to note that harms of this kind could not be abated under the current classification of Internet service providers as Title II public utilities, nor after their upcoming reclassification as Title I information services. Instead of expanding the utility-style Title II regulation promoted by Democratic members, Republicans asked whether there are widespread, systematic problems in the current online ecosystem and sought input on specific ways to address those specific problems.
In probing for systematic issues in the current system, Republicans stressed the importance of not jumping to regulatory action too quickly. On the issues at hand, Dr. Omri Ben-Shahar, a law professor at the University of Chicago Law School, presented his findings on the ineffectiveness of the mandatory disclosure policies that are prevalent online.
From iTunes agreements to financial disclosures, Dr. Ben-Shahar explained that most people do not read, or do not understand, these overused disclosure policies, through which most companies obtain consumers’ consent to use certain kinds of data. These disclosures are ineffective, he said, because they are too simplistic and vague in some instances and too complex and technical in others, failing well-informed and under-informed consumers alike.
This finding dovetailed with the work of another panelist from the hearing: Dr. Catherine Tucker. As the Sloan Distinguished Professor of Management Science and Professor of Marketing at the MIT Sloan School of Management, she has documented the discrepancy between the high value consumers say they place on their privacy and the actions they actually take to give it away.
Dr. Tucker explained how, in one experiment, even people who said they highly valued their privacy were willing to hand over personal data in exchange for free pizza. When actually faced with the decision to trade personal data for a good or service, people routinely make the exchange; when merely asked about that decision, however, many claim to place a higher value on the privacy of their personal data than they exercise in practice.
According to Dr. Tucker, consumers lack understanding of their data, and she encouraged the firmer establishment of individual property rights in data to help consumers bridge this gap. That does not mean consumers are unwilling to share data, just that clearer distinctions can help them decide when to do so.
Even so, experts do not fully understand the effects of latent inferences drawn from seemingly innocuous consumer information, according to the work of Dr. Michael Kearns, Professor and National Center Chair in the Department of Computer and Information Science at the University of Pennsylvania. This means that even if consumers willingly disclose only harmless information to companies, machine learning systems can use that information to extrapolate more personal details that were never provided.
If experts in this field lack understanding of these processes, why would we give policymakers and bureaucrats the ability to preemptively regulate them?
To protect consumers from harm? But what harm, exactly, would regulation shield consumers from? Seeing too many shoe ads? Receiving too many deals from websites they already frequent?
Dr. Ben-Shahar implored the committee members to consider one specific question: what is the actual consumer harm? When government tries to correct a problem, it is important that there is an actual problem to fix and that government understands the problem’s magnitude.
In cases where bias and discrimination seem prevalent, there may be other forces at work – like the economics of online advertising – that better explain the outcomes, said Dr. Tucker. While AI and machine learning certainly raise issues relating to racism, these are not fixable with some blanket regulatory action.
Further, regulations already on the books likely cover any perceived harms. Strong ex post enforcement against systematic harms in the marketplace is a more effective way for government both to curb injuries to consumers and to preserve an environment conducive to permissionless innovation than the preemptive stranglehold of ex ante regulation. It just requires that government do the less-than-sexy legwork of going after actual wrongdoers, rather than licensing and paperworking everyone in the market to death in order to preempt bad actors.
Ms. Laura Moy, Deputy Director of the Georgetown Law Center on Privacy and Technology, raised this concern with respect to the Federal Trade Commission, but the FTC, as the premier consumer protection agency, does have strong enforcement capabilities in the online marketplace.
Dr. Ben-Shahar noted the “grand bargain” online consumers currently have in place: they pay for free services like Facebook and Twitter not with money but with the inconspicuous data they provide these platforms. Upending that bargain to fix theoretical problems via ex ante regulation could undermine the foundation of the internet marketplace – a marketplace that has been, and can continue to be, instrumental in alleviating social ills.
When something is new or different, people can be quick to ostracize it simply because they don’t understand it. Taking the time to learn about and diagnose specific problems is an important step in addressing them, but if even computer experts aren’t sure how parts of their industry are developing, how can 535 professional lawmakers (some of whom pronounce algorithms “al-GO-rithms”) know what to do?