Table of Contents
- Executive Summary
- Introduction
- A Tale of Two Algorithms
- Russian Interference, Radicalization, and Dishonest Ads: What Makes Them So Powerful?
- Algorithmic Transparency: Peeking Into the Black Box
- Who Gets Targeted, or Excluded, by Ad Systems?
- When Ad Targeting Meets the 2020 Election
- Regulatory Challenges: A Free Speech Problem, and a Tech Problem
- So What Should Companies Do?
- Key Transparency Recommendations for Content Shaping and Moderation
- Conclusion
So What Should Companies Do?
What would a better framework, one that puts democracy, civil liberties, and human rights above corporate profits, look like?
It is worth noting that Section 230 stipulates that companies are protected from liability for “any action voluntarily taken in good faith” to enforce their rules. Section 230 doesn’t provide any guidance as to what “good faith” actually means for how companies should govern their platforms.1 Some legal scholars have proposed reforms that would keep Section 230 in place but would clarify the steps companies need to take to demonstrate good-faith efforts to mitigate harm, in order to remain exempt from liability.2
Companies hoping to convince lawmakers not to abolish or drastically change Section 230 would be well advised to proactively and voluntarily implement a number of policies and practices to increase transparency and accountability. This would help to mitigate real harms that users or communities can experience when social media is used by malicious or powerful actors to violate their rights.
First, companies’ speech rules must be clearly explained and consistent with established human rights standards for freedom of expression. Second, these rules must be enforced fairly, according to a transparent process. Third, people whose speech is restricted must have an opportunity to appeal. Finally, companies must regularly publish transparency reports with detailed information about the steps they take to enforce their rules.3
Since 2015, RDR has encouraged internet and telecommunications companies to publish basic disclosures about the policies and practices that affect their users’ rights. Our annual index benchmarks major global companies against each other and against standards grounded in international human rights law.
Much of our work measures companies’ transparency about the policies and processes that shape users’ experiences on their platforms. We have found that, absent a regulatory agency empowered to verify that companies are conducting due diligence and acting on it, transparency is the best accountability tool at our disposal. Once companies are on the record describing their policies and practices, journalists and researchers can investigate whether they are actually telling the truth.
Transparency allows journalists, researchers, the public, and their elected representatives to make better-informed decisions about the content they receive and to hold companies accountable.
We believe that platforms have the responsibility to set and enforce the ground rules for user-generated and ad content on their services. These rules should be grounded in international human rights law, which provides a framework for balancing the competing rights and interests of the various parties involved.4 Operating in a manner consistent with international human rights law will also strengthen the United States’ long-standing bipartisan policy of promoting a free and open global internet.
But again, content is only part of the equation. Companies must also publicly disclose the different technological systems at play: the content-shaping algorithms that determine what user-generated content users see, and the ad-targeting systems that determine who can pay to influence them. Specifically, companies should explain the purpose of their content-shaping algorithms and the variables that influence them, so that users can understand the forces that cause certain kinds of content to proliferate and other kinds to disappear.5 Today, companies are neither transparent about nor accountable for how their targeted-advertising policies and practices and their use of automation shape the online public sphere by determining the content and information that internet users receive.6
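To make this concrete, the disclosure we have in mind could be as simple as a structured, machine-readable list of ranking variables. The sketch below is purely illustrative: the signal names, descriptions, and coarse weight buckets are our own assumptions, not any platform’s actual disclosure format.

```python
from dataclasses import dataclass

@dataclass
class RankingSignalDisclosure:
    """One entry in a hypothetical, machine-readable disclosure of the
    variables that feed a platform's content-ranking algorithm."""
    name: str          # internal signal name, e.g. "predicted_engagement"
    description: str   # plain-language explanation for users
    weight_class: str  # coarse bucket ("major", "minor"), not the exact coefficient

# Illustrative entries only; no platform publishes its signals in this form today.
DISCLOSED_SIGNALS = [
    RankingSignalDisclosure(
        name="predicted_engagement",
        description="Likelihood the user will click, like, or share the item.",
        weight_class="major",
    ),
    RankingSignalDisclosure(
        name="source_follow_relationship",
        description="Whether the user follows the account that posted the item.",
        weight_class="major",
    ),
    RankingSignalDisclosure(
        name="content_recency",
        description="How recently the item was posted.",
        weight_class="minor",
    ),
]
```

Even a coarse disclosure like this, published and kept current, would let users and researchers reason about why certain content proliferates without requiring companies to reveal exact model weights.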
Companies also need to publish their rules for ad targeting, and be held accountable for enforcing them. Our research shows that while Facebook, Google, and Twitter all publish ad-targeting rules that list broad audience categories advertisers are prohibited from using, the categories themselves can be excessively vague. Twitter, for instance, bans advertisers from using audience categories “that we consider sensitive or are prohibited by law, such as race, religion, politics, sex life, or health.”7 Nor do these platforms disclose any data about the number of ads they removed for violating their ad-targeting rules (or other actions they took).8
Facebook says that advertisers can target ads to custom audiences, but prohibits them from using targeting options “to discriminate against, harass, provoke, or disparage users or to engage in predatory advertising practices.” However, not everyone can see what these custom audience options are, since they are only available to Facebook users. And Facebook publishes no data about the number of ads removed for breaching its ad-targeting rules.
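To illustrate why vague category lists are hard to enforce, consider a minimal sketch of how a platform might screen an ad’s targeting parameters against a published list of prohibited sensitive categories. Everything here is hypothetical: the function, the parameter format, and the segment-to-category mapping. The point is that the judgment calls live in that mapping, which the published rules leave undefined.

```python
# Hypothetical list of prohibited sensitive categories, loosely modeled
# on the kind of list Twitter publishes (quoted in the text above).
PROHIBITED_CATEGORIES = {"race", "religion", "politics", "sex_life", "health"}

def screen_targeting(parameters: dict[str, str]) -> list[str]:
    """Return the sensitive categories an ad's targeting parameters touch.

    The hard part is the mapping below: deciding that an interest segment
    like "gospel_music" maps to "religion" is exactly the kind of vague
    judgment call that published rules leave unresolved.
    """
    # Hypothetical mapping from advertiser-facing interest segments
    # to the sensitive categories they arguably reveal.
    SEGMENT_TO_CATEGORY = {
        "gospel_music": "religion",
        "diabetes_support": "health",
        "conservative_news": "politics",
    }
    violations = []
    for segment in parameters.get("interest_segments", "").split(","):
        category = SEGMENT_TO_CATEGORY.get(segment.strip())
        if category in PROHIBITED_CATEGORIES:
            violations.append(category)
    return violations

# Example: this ad would be flagged for touching "health".
print(screen_targeting({"interest_segments": "diabetes_support, cooking"}))
```

Publishing the mapping itself, and the number of ads rejected by checks like this one, is what would turn a vague promise into an auditable rule.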
Platforms should set and publish rules for targeting parameters, and those rules should apply equally to all ads. A practice like this would make it much more difficult for companies to violate anti-discrimination laws like the Fair Housing Act. Moreover, once an advertiser has chosen its targeting parameters, companies should refrain from further optimizing the ad’s distribution, as this may lead to further discrimination.9
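The distinction between advertiser-chosen targeting and platform-side delivery optimization, documented in the Ali et al. study cited here, can be captured in a few lines. The sketch below is an assumption-laden illustration, not any platform’s actual delivery code: it shows what “refraining from further optimization” would mean in practice, namely uniform delivery within the audience the advertiser selected.

```python
import random

def deliver_ad(eligible_users: list[str], impressions: int) -> list[str]:
    """Pick recipients uniformly at random from the eligible audience.

    Contrast with optimized delivery, which would rank eligible_users by a
    predicted-relevance score and thereby reintroduce skews (e.g., by race
    or gender) that the advertiser never asked for.
    """
    return random.choices(eligible_users, k=impressions)
```

Uniform delivery does not, of course, cure discrimination that an advertiser deliberately encodes in the targeting parameters themselves; that is what the published targeting rules discussed above are for.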
Platforms should not differentiate between commercial, political, and issue ads, for the simple reason that it is impossible to draw such lines fairly, consistently, and at global scale, and attempting to do so only complicates the problem of targeting.
Limiting targeting, as Federal Election Commissioner Ellen Weintraub has argued,10 is a much better approach, though here again, the same rules should apply to all types of ads. Eliminating targeting practices that exploit individual internet users’ characteristics (real or assumed) would protect privacy, reduce filter bubbles, and make it harder for political advertisers to send different messages to different constituent groups. This is the kind of reform that will be addressed in the second part of this report series.
In addition, companies should conduct due diligence through human rights impact assessments covering all aspects of their operations: what their rules are, how they are enforced, and what steps companies take to prevent violations of users’ rights. This process forces companies to anticipate worst-case scenarios and change their plans accordingly, rather than simply rolling out new products or entering new markets and hoping for the best.11 A robust practice like this could reduce or eliminate some of the phenomena described above, from the proliferation of election-related disinformation to YouTube’s tendency to recommend extreme content to unsuspecting users.
All systems are prone to error, and content moderation processes are no exception. Platform users should have access to timely and fair appeals processes to contest a platform’s decision to remove or restrict their content. While the details of individual enforcement actions should be kept private, transparency reporting provides essential insight into how a company is addressing the challenges of the day. Facebook, Google, Microsoft, and Twitter have finally begun publishing such reports,12 though their disclosures could be far more specific and comprehensive.13 Notably, they should include data about the enforcement of ad content and ad-targeting rules.
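As a sketch of what the recommended ad-enforcement reporting could contain, the record below lists the kinds of counts that would let outsiders track enforcement over time. The field names and example figures are hypothetical, intended only to show the level of granularity we have in mind; no major platform currently reports these numbers.

```python
from dataclasses import dataclass

@dataclass
class AdEnforcementReport:
    """One reporting period's worth of hypothetical ad-enforcement data."""
    period: str                       # reporting window, e.g. "2020-Q1"
    ads_removed_content_rules: int    # ads removed for violating ad content rules
    ads_removed_targeting_rules: int  # ads removed for violating ad-targeting rules
    accounts_suspended: int           # advertiser accounts suspended
    appeals_received: int             # appeals filed against these decisions
    appeals_granted: int              # decisions reversed on appeal

# Illustrative values only; not drawn from any real disclosure.
example = AdEnforcementReport("2020-Q1", 1200, 340, 25, 410, 98)
```

Reporting appeals alongside removals matters: the ratio of decisions reversed on appeal is one of the few external signals of how accurate enforcement actually is.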
Our complete transparency and accountability standards can be found on our website. Key transparency recommendations for content shaping and content moderation are presented in the next section.
Citations
- Kosseff, Jeff. 2019. The Twenty-Six Words That Created the Internet. Ithaca: Cornell University Press.
- Citron, Danielle Keats, and Benjamin Wittes. 2017. “The Internet Will Not Break: Denying Bad Samaritans §230 Immunity.” Fordham Law Review 86(2): 401–23.
- See also Pírková, Eliška, and Javier Pallero. 2020. 26 Recommendations on Content Governance: A Guide for Lawmakers, Regulators, and Company Policy Makers. Access Now.
- Kaye, David. 2019. Speech Police: The Global Struggle to Govern the Internet. New York: Columbia Global Reports.
- Singh, Spandana. 2019. Everything in Moderation: An Analysis of How Internet Platforms Are Using Artificial Intelligence to Moderate User-Generated Content. Washington, D.C.: New America’s Open Technology Institute; Singh, Spandana. 2019. Rising Through the Ranks: How Algorithms Rank and Curate Content in Search Results and on News Feeds. Washington, D.C.: New America’s Open Technology Institute.
- Ranking Digital Rights. 2020. The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems – Pilot Study and Lessons Learned. Washington, D.C.: New America.
- Twitter. 2020. “Privacy Policy.” (Accessed February 20, 2020).
- Ranking Digital Rights. 2020. The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems – Pilot Study and Lessons Learned. Washington, D.C.: New America.
- Ali, Muhammad, et al. 2019. “Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Biased Outcomes.” Proceedings of the ACM on Human-Computer Interaction 3(CSCW): 1–30.
- Weintraub, Ellen L. 2019. “Don’t Abolish Political Ads on Social Media. Stop Microtargeting.” Washington Post.
- Allison-Hope, Dunstan. 2020. “Human Rights Assessments in the Decisive Decade: Applying UNGPs in the Technology Sector.” Business for Social Responsibility.
- Ranking Digital Rights. 2019. Corporate Accountability Index. Washington, D.C.: New America.
- In particular, Microsoft only reports requests from individuals to remove nonconsensual pornography, also referred to as “revenge porn,” which is the sharing of nude or sexually explicit photos or videos online without an individual’s consent. See “Content Removal Requests Report – Microsoft Corporate Social Responsibility.” Microsoft.