New America


Pressing Facebook for More Transparency and Accountability Around Content Moderation


On Tuesday, New America's Open Technology Institute (OTI) and 79 organizations from around the world signed on to an open letter calling on Facebook CEO Mark Zuckerberg to implement more meaningful due process around the social network's moderation of content on its platform.

Specifically, we are pushing Facebook and other internet companies to adopt the recommendations outlined in the Santa Clara Principles, which were launched in May by OTI and a coalition of organizations, advocates, and academic experts who support the right to free expression online. That document outlines the minimum standards tech platforms should meet in order to provide adequate transparency and accountability around their efforts to regulate user-generated content and take action against user accounts that violate their platforms' rules. The Principles demand three things:

  • Numbers: that is, detailed transparency reporting that explains how much content and how many accounts are affected by content moderation efforts;
  • Notice: users should get clear notice of when and why their content has been taken down or their accounts suspended; and
  • Appeals: there should be a robust process for affected users to appeal the platform's decision to a human decision maker.

Our letter especially focuses on the need for a more robust appeals process at Facebook. The company announced a new process for appealing the takedown of individual posts earlier this year, but it only applied to a few specific categories of content: nudity and sexual activity, hate speech, and graphic violence. The letter urges the company to expand that process to all content takedowns regardless of category, while repeating calls for greater transparency.

Yesterday, Facebook demonstrated new progress in providing that transparency by issuing the second edition of its Community Standards Enforcement Report, which provides data about Facebook's removal of user content based on violations of its community standards. Facebook released a detailed version of those community standards in April, followed in May by the company's first edition of the transparency report that was updated yesterday. Facebook is only the second company to attempt such a comprehensive transparency report on its content moderation practices; Google issued its own report, covering YouTube, in April, though it noticeably didn't cover any other Google products. As we said at the time, both reports were important steps toward meaningful transparency and accountability, but more needs to be done.

Facebook took a few more steps with this latest edition of its content moderation transparency report, introducing data on two new categories of content: bullying and harassment, and child nudity and child sexual exploitation. In addition, the report highlights that Facebook intends in the future to share data on how much content the company has taken down by mistake, based on the data it's collecting through its new appeals process. These are both welcome developments.

However, as emphasized by the Santa Clara Principles and OTI's recently published Transparency Reporting Toolkit on content takedown reporting, there are a number of additional improvements that we think Facebook should implement to make its reports even more useful.

  1. One number to rule them all. Right now, Facebook's report is broken down by types of content like hate speech and spam, giving data on how prevalent each category of content is on the platform and how much of that category Facebook acted on. However, what is still lacking (in part because the report still doesn't cover all the different types of content that are barred by Facebook's community guidelines) is a single combined number that expresses the prevalence of guideline-violating content overall, and a single combined number indicating how many pieces of content were taken down overall. We need a unified overview of all moderation activity on the platform, not just silos of activity around specific categories of content.
  2. More details about automated content identification and takedowns. As companies quickly move to identify, and in some cases even remove, content based on automated rather than human decision-making, transparency and accountability around that decision-making become even more important. Facebook has offered some sliver of transparency on this point. In detailing its moderation of terrorist content, Facebook offered specific numbers about how much terrorist content was detected by automated tools that can identify and take down new uploads of older content that has previously been found to violate the rules. The same post also detailed how much content was identified by (presumably machine learning-based) tools that flag new potentially terrorist content for human review. Providing such data for all categories of content should be a key feature of the next transparency report.
  3. How many people are affected? Although it's important to know how much content is taken down as part of Facebook's content moderation operations, it's also important to understand how many human beings have been silenced as a result. Right now, there's no data on how many Facebook users are directly affected: neither data about how many users' content is taken down, nor about how many accounts are temporarily or permanently suspended as a result of rule violations. We need both types of data to truly understand how many people these policies are impacting.
  4. How much content is flagged by users, versus how much is taken down? A great deal of the content taken down by Facebook was first identified thanks to flagging by users themselves. However, we don't have a good idea of how many pieces of content are flagged, and how many of those are ultimately determined to be in violation of the rules. Having that data would be incredibly useful in understanding the volume of content that Facebook has to subject to human review, and how accurate or inaccurate users' flagging behavior typically is. Both have implications for what policymakers and the public can reasonably expect from companies in terms of moderation, and may hold clues for how to better design moderation policies and procedures.
  5. Who flagged the content? Not all content flags come from regular users of a platform; some come from "trusted flaggers" and government agencies with direct lines to the companies via "Internet Referral Units" (IRUs). The possibility of informal censorship through government pressure via these channels is very real, and raises serious human rights concerns that call for even more transparency. That's why we think that Facebook and other companies also need to disclose how many flags come from such sources, with as much specificity as possible. Automattic, the maker of WordPress, has taken a first step in this direction by reporting how many IRU referrals it receives, but much more could be done, and Facebook could be the first company to publish comprehensive numbers in this area. All the better if that data, in combination with the data we asked for in #4, allowed us to compare the accuracy of the different flagger populations: do governments inaccurately flag content more than users do, or less? We want to know!
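To make these asks concrete, here is a minimal sketch, in Python with entirely invented figures and category names (these are not Facebook's actual data or reporting format), of the kinds of aggregate metrics that points 1, 4, and 5 call for: a single combined takedown total across categories, and flag accuracy broken down by flagger source.

```python
# Hypothetical illustration only: the numbers, categories, and flagger
# sources below are invented for the sketch, not real report data.

# Point 1: per-category takedown counts, as reports present them today,
# rolled up into one combined overall number.
takedowns_by_category = {
    "hate_speech": 2_500_000,
    "spam": 837_000_000,
    "bullying_and_harassment": 2_100_000,
}
total_takedowns = sum(takedowns_by_category.values())

# Points 4 and 5: flags broken down by who submitted them, alongside how
# many of those flags were upheld (content found in violation) on review.
flags_by_source = {
    "user":            {"flagged": 10_000_000, "upheld": 4_000_000},
    "trusted_flagger": {"flagged":    500_000, "upheld":   400_000},
    "government_iru":  {"flagged":    100_000, "upheld":    60_000},
}

def flag_accuracy(stats):
    """Share of flags ultimately found to violate the rules."""
    return stats["upheld"] / stats["flagged"]

accuracy_by_source = {
    source: flag_accuracy(stats) for source, stats in flags_by_source.items()
}
```

With data in this shape, the comparison point 5 asks about falls out directly: in the invented figures above, government IRU flags are upheld more often than ordinary user flags, but any real answer would of course depend on the actual numbers the platforms disclose.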

Although there is always more that can be done, we want to recognize this latest report as another valuable step towards promoting greater transparency around how Facebook is regulating users' speech, and we hope to see more companies issuing similar reports soon. As the power of internet platforms to decide what we are allowed to say online grows, the need for transparency and accountability from those companies grows too. Mark Zuckerberg himself issued a lengthy post on Thursday afternoon discussing the future of content governance at Facebook. The post suggests that the company will continue to move in the direction of increased transparency and accountability, which we hope is the case. Indeed, the future of online free expression depends on it.

About the Authors

Spandana Singh

Policy Analyst, Open Technology Institute
