Cracking the Code of Content Moderation 

USC researchers are creating science-backed tools to improve social media content moderation policies. 

Flagging, demotion, and deletion of content; temporary or permanent suspension of users: these are some of the interventions used to keep social media platforms safe, trustworthy, and free from harmful content. But what is the best way to implement these interventions? Luca Luceri, a research scientist at USC’s Information Sciences Institute (ISI), is part of a team that is using science to guide social media regulations.

Luceri is working on CARISMA (CAll for Regulation Support In Social MediA), an interdisciplinary research project funded by the Swiss National Science Foundation with the goal of “establishing a clear, traceable, and replicable methodological framework for assessing policies that effectively mitigate the harms of online actors responsible for abusive and illicit behavior.” 

But to assess social media content moderation policies, the researchers first need to understand them. Luceri explained, “Content moderation strategies change frequently. They’re not clearly communicated or transparent. There are no guidelines about the potential interventions, for example, how many times you have to do a certain action to be suspended temporarily or permanently.”
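Luceri’s example hints at what such guidelines could look like if they were made explicit. The sketch below is purely illustrative, not a description of any platform’s actual rules or of CARISMA’s methodology; every threshold and name in it is a hypothetical assumption, showing only the general idea of a transparent, strike-based policy that escalates interventions as a user accumulates confirmed violations.

```python
# Hypothetical strike-based escalation policy.
# Illustrative only: thresholds and intervention names are invented,
# not taken from any real platform or from the CARISMA project.

from dataclasses import dataclass

# Assumed thresholds (hypothetical values):
DEMOTE_THRESHOLD = 2        # repeated violations: content is demoted
TEMP_SUSPEND_THRESHOLD = 3  # further violations: temporary suspension
PERM_SUSPEND_THRESHOLD = 5  # persistent abuse: permanent suspension


@dataclass
class UserRecord:
    user_id: str
    violations: int = 0  # count of confirmed policy violations


def intervene(user: UserRecord) -> str:
    """Return the intervention triggered by a user's latest confirmed violation."""
    user.violations += 1
    if user.violations >= PERM_SUSPEND_THRESHOLD:
        return "permanent suspension"
    if user.violations >= TEMP_SUSPEND_THRESHOLD:
        return "temporary suspension"
    if user.violations >= DEMOTE_THRESHOLD:
        return "demote content"
    return "flag content"  # first violation: content is only flagged


# Example: interventions escalate as one user's violations accumulate.
user = UserRecord("u123")
print([intervene(user) for _ in range(5)])
# ['flag content', 'demote content', 'temporary suspension',
#  'temporary suspension', 'permanent suspension']
```

A rule set like this would make moderation predictable and auditable; part of the difficulty Luceri describes is that real platforms publish nothing this precise, so their policies must be inferred from observed behavior.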

He recently co-authored two papers for CARISMA: “These papers are the first attempt to better understand how moderation policy strategies work, if they work, and what kind of misbehavior they are capable of identifying and moderating.”
