
In Short

Free Speech vs Online Safety? You Shouldn't Have to Choose

All Users Need Content Moderation to be Equitable, Transparent, and Consistent


Elon Musk's pending acquisition of Twitter, one of the world's largest social media platforms, has generated significant controversy and speculation. Should the deal go through, Musk's newfound control of the company is expected to bring numerous changes. Over the past few months, he has made several comments about how he would improve the platform, ranging from reducing its reliance on advertising revenue, to changing the company's content algorithms, to transforming Twitter into a safe haven for a broad range of online speech, including speech that could be harmful to certain users and communities. Musk's stance on free speech has particularly raised alarm bells, as it could have profound consequences for how the service moderates and manages online content.

Over the past several years, civil society organizations, including OTI, have called on internet platforms to stop creating opaque and weak content policies and enforcing them inconsistently. Musk's announcements come at a time when the real-world consequences of such flawed content moderation policies and practices are becoming more apparent every day, especially for already marginalized communities around the world. During this time, it is critical to remember that free speech and online safety must be considered in tandem with one another, and that neither is experienced equally by different individuals and communities.

The classic Evelyn Beatrice Hall phrase, "I may disapprove of what you say, but I will defend to the death your right to say it," reflects the attitude many online advocates, including Elon Musk, hold toward free speech. The initial rise of social media was heralded as a win for free speech, as it reduced barriers to sharing information and allowed individuals across the globe to communicate at scale. However, over the past decade and a half, it has become evident that online spaces such as social media platforms are not inherently equitable, and that we need to reexamine the relationship between online speech and safety.

Research shows that marginalized and vulnerable groups, including women, people of color, the LGBTQ community, and religious minorities, often face disproportionate levels of hate and harassment in these online spaces, just as they do in offline ones. These harmful experiences often chill the free speech of these individuals and curtail their ability to engage and communicate freely on these platforms.

For example, research shows that female journalists and politicians are often targets of gendered abuse and harassment, impacting their mental health, job performance, and if and how they engage with online spaces. Similarly, a Pew Research Center survey from 2017 shows that 59% of Black and 48% of Hispanic internet users reported experiencing online harassment, including being called offensive names, being purposefully embarrassed online, and being subject to physical threats, stalking, sexual harassment, or harassment over a sustained time period. Additionally, online hate and harassment against the LGBTQ community have been on the rise. According to GLAAD's Social Media Safety Index, 64% of LGBTQ social media users reported experiencing harassment and hate speech, including on platforms such as Facebook, Twitter, YouTube, Instagram, and TikTok. This kind of harassment can lead to self-harm and suicidal behaviors, particularly among LGBTQ youth. The experiences of these various groups are shared by many other marginalized communities, including religious minorities and ethnic groups.

Additionally, many internet platforms devote more time and resources to identifying and addressing harmful speech in English-speaking communities and in Western nations than elsewhere. As demonstrated by the genocide in Myanmar, the ongoing conflict in Ethiopia, and communal violence in other countries, failure to remove and prevent the amplification of harmful content can contribute to profound offline consequences, including violence and death.

Further, in many situations, platforms' automated tools and human reviewers lack sufficient training to accurately moderate content in certain languages. This can undermine the legitimate speech of many communities. For example, several Arabic-speaking social media users have had their posts moderated for using colloquial expressions or slang words that translate into "bomb," "missile," and "martyr." These errors demonstrate the limitations of content moderators and their tools in understanding critical language- and region-specific context and nuance, and how these limitations can exacerbate biases against already profiled and scrutinized communities. In many instances, platforms also opaquely remove the content and accounts of journalists, activists, and political opposition members in the Global South, raising concerns around government influence over online speech.

As companies navigate the increasingly complex content moderation landscape, and the changing relationship between online speech and safety, they should keep a few key principles in mind:

Content policies must be clear and consistently enforced: For many years, internet platforms did not publicly disclose the policies they use to guide their content moderation practices. This began to change in 2018, and thanks to sustained civil society pressure, these kinds of disclosures have become standard practice. However, many of these policies fail to recognize the unique needs and experiences of their diverse user bases, and lack clarity and nuance. Additionally, companies often enforce policies inconsistently and grant politicians and world leaders broad exemptions from their rules. This undermines the integrity of platform content moderation practices and allows online harm against certain communities to continue.

Companies need to invest more in stakeholder engagement: Many platforms consult with community groups and advocates when developing their content policies and associated technologies and practices. However, their stakeholder engagement is often limited to countries or communities with a large market share or regulatory power, or they engage with vulnerable groups only after those groups have suffered a consequential event. This means that already vulnerable and marginalized groups are further sidelined in decision-making processes that could impact their online speech and safety. Companies should expand their stakeholder engagement practices across the globe, engage in ongoing risk assessments to identify communities and locations that require greater attention, and perform ongoing impact assessments to evaluate the effects the company has had on certain communities.

Transparency around the scope and scale of platform content moderation and curation efforts is key for accountability: Currently, it is difficult for civil society, researchers, and policymakers to examine how platforms' content moderation and curation practices are impacting the speech and experiences of different communities, as these platforms provide very little transparency. Beginning in 2018, some companies began issuing transparency reports outlining the scope and scale of some of their content moderation efforts. As OTI has noted, this is a good first step, but there are many ways that companies can improve.

For example, companies should provide more granular reporting on their content moderation efforts across different languages, including data on the language and geographical distribution of their content moderation and the amount of content platforms reinstated after identifying errors or receiving appeals. Additionally, companies should provide more information on how effective their efforts are across different categories of content, such as hate speech and disinformation, as this is key for understanding which mitigation efforts work best. In situations where user privacy is a concern, companies should also provide vetted researchers with access to their content moderation data so these individuals can independently examine the structure and impacts of company content moderation processes. Companies can and should provide transparency around their content moderation and curation efforts in a range of formats and venues, depending on the intended audience and use case. Transparency is key to enhancing understanding of how platforms are operating and what impact they are having, which is, in turn, critical for generating accountability.

Users need more agency and control: In many circumstances, content moderation can feel like a one-way, top-down process in which platforms make moderation decisions and users have little agency to question or push back. But it shouldn't be. Platforms must notify users when their content or accounts are flagged or removed, and inform users who flag harmful content or accounts about the outcome of the moderation decision. Additionally, platforms must provide both categories of users with access to a timely appeals process. These are ways of augmenting user agency and engagement in the content moderation process and providing much-needed transparency, remedy, and redress.

Additionally, companies should invest more resources in developing tools for users who regularly face hate and harassment. This includes, for example, mass reporting tools and the ability to mute or restrict how certain accounts can engage with the user.

As business leaders like Elon Musk seek to change today's largest social media platforms, they must remember that while these platforms provide critical opportunities to share content and exercise free speech for some, for others they can serve the opposite function. Both free speech and safety should be integral pillars of the online ecosystem. However challenging it is to navigate the complex relationship between the two, it is imperative that platforms invest more resources in ensuring their content moderation policies and practices are equitable, transparent, and consistent. This will help all users realize the benefits of online speech and protect those most vulnerable from the threat of violence and harassment.

More About the Authors

Spandana Singh

Policy Analyst, Open Technology Institute
