Spandana Singh
Policy Analyst, Open Technology Institute
Over the past year, technology companies such as Google, Facebook, and Twitter have faced mounting pressure from governments to significantly augment their efforts to counter violent extremism (CVE) online. These calls for action have come in the wake of terror attacks in several countries, attacks that were found to have been facilitated by extremists who had been radicalized online via these platforms.
Although government officials in countries such as the United States have pressed major technology platforms to improve their efforts to counter extremism, some governments have gone a step further by introducing legislation intended to guarantee results. At the beginning of 2018, for example, Germany's Network Enforcement Act (NetzDG) came into full effect. The hate speech law requires social network platforms with over two million members to remove fake news and hate speech (which includes extremist content) within 24 hours of the content being flagged to the company. If a company fails to meet this deadline, it risks fines of up to €50 million ($61.5 million).
Similarly, in the United Kingdom, Prime Minister Theresa May, one of the strongest global advocates for increased policing of online content, has pushed technology companies to remove hate speech and extremist content more swiftly. May's call to action came after Policy Exchange, a British think tank, released a report stating that online jihadist propaganda has higher engagement rates in the United Kingdom than in any other European nation. The report, in conjunction with a string of terror attacks carried out by extremists across the United Kingdom, has raised concerns regarding the extent and potency of online radicalization channels. Other members of the British government have also put forth proposals for increased regulation of technology companies in this regard. Security Minister Ben Wallace, for example, has argued that platforms should face tax penalties if they do not remove extremist content in a timely manner, and Sadiq Khan, the Mayor of London, recently predicted that other nations will follow in Germany's footsteps and clamp down on these companies should they fail to improve their efforts.
In response, major tech companies have worked to ramp up their efforts to take down extremist content. In 2017, Facebook, Microsoft, Twitter, and YouTube came together to create the Global Internet Forum to Counter Terrorism (GIFCT). The Forum aims to strengthen technology company-led CVE approaches by facilitating resource sharing (including a shared industry database of hashes of known extremist content) and workshops where larger companies can impart best practices in the field to smaller platforms.
There is no doubt that the presence of extremist groups and radical content online is problematic and dangerous. However, there has been relatively little research conducted on the efficacy of content and account moderation efforts in countering violent extremism. In addition, the research that has been conducted does not permit us to draw meaningful conclusions about which approaches are effective, because the field lacks clear definitions, standardized approaches, and established metrics for assessing success. Without a clear understanding of what approaches work best and how they can be expanded in scope and strategy, there is a real risk that tech companies are wasting their efforts and resources on unproven methods. Continued governmental attempts to intimidate companies into ramping up their takedown efforts are also problematic, because they force companies to keep implementing approaches that could be having deleterious effects, rather than taking the time to identify which approaches are truly impactful and how they can be made more strategic and effective.
In my forthcoming policy paper for the Millennials Initiative at New America, I highlight a number of ways in which researchers can broaden and improve the evaluation frameworks they have thus far applied to assessments of content and account moderation efforts targeting extremist groups online. In addition, my paper makes recommendations on how companies, individually and collectively, can bolster future research and evaluation of these moderation efforts. In particular, I urge companies to increase the granularity of their transparency reporting on content moderation and to collaborate with one another to establish clear metrics and standards for success.
This blog is part of Caffeinated Commentary – a monthly series where the Millennial Fellows create interesting and engaging content around a theme. Because the fellows are hosting a symposium focused on elevating new voices and policy ideas this month, they will each create content around their own policy research topics.