Addressing the security risks of anti-Roma hate speech on social media platforms
This paper examines online hate speech and its associated security risks, focusing on user-generated content (UGC) targeting Roma and the moderation standards, tools, processes, and practices that govern it. The Romani people have experienced systemic racism, discrimination, and hostility across many countries, and these attitudes are perpetuated, amplified, and broadcast in the online space in the form of hateful and racist speech and incitement to violence and genocide. The paper illustrates cases of anti-Roma narratives and their translation into the online realm, mapping their harmful impacts on Romani communities and individuals. These observations then inform an examination of the challenges and tensions that platforms face in moderating online content. The paper argues that AI-based detection tools are integral to tackling hate speech at scale, but that the highly contextual nature of hateful content and its differentiated risks make them unfit to serve as the sole basis for moderation decisions. Content moderation models are effective when implemented holistically, with each layer extending the protection provided by the others and compensating for their limitations. Social media providers must adopt a victim-sensitive approach to address the asymmetric threats that hate speech poses to minority, marginalized, and other vulnerable groups.
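To make the layered-moderation argument concrete, the following is a minimal sketch, not taken from the paper: the classifier, the function and field names, and the threshold values are all hypothetical. It illustrates the decision structure the abstract describes, in which an automated layer acts alone only at high confidence, while ambiguous, context-dependent content and user-reported posts are escalated to human review.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable


class Decision(Enum):
    REMOVE = "remove"              # high-confidence violation, actioned automatically
    HUMAN_REVIEW = "human_review"  # ambiguous or context-dependent, escalated
    KEEP = "keep"                  # high-confidence non-violation


@dataclass
class Post:
    text: str
    reported_by_users: bool = False  # user reports act as a complementary signal


def moderate(
    post: Post,
    score: Callable[[str], float],  # stand-in for an ML classifier returning P(violation)
    remove_at: float = 0.95,        # illustrative thresholds, not from the paper
    review_at: float = 0.50,
) -> Decision:
    """Layered decision rule: the automated layer acts alone only at high confidence.

    Content in the uncertain middle band -- where context such as reclaimed slurs,
    counter-speech, or coded anti-Roma tropes defeats purely lexical models --
    is escalated to human reviewers; user reports lower the escalation bar.
    """
    p = score(post.text)
    if p >= remove_at:
        return Decision.REMOVE
    if p >= review_at or post.reported_by_users:
        return Decision.HUMAN_REVIEW
    return Decision.KEEP


# Toy usage with a dummy scorer; a production system would plug in a trained model.
if __name__ == "__main__":
    dummy = lambda text: 0.7  # pretend the classifier is uncertain about this post
    print(moderate(Post("example post"), score=dummy))  # Decision.HUMAN_REVIEW
```

The design point of the sketch is the one the abstract makes: each layer (automated scoring, user reports, human review) compensates for the limitations of the others, so no single component serves as the exclusive basis for a moderation decision.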