
Meta shifts to user-driven content moderation model


Mark Zuckerberg has announced a significant shift in content moderation strategy for Meta platforms, replacing third-party fact-checking with a community-driven model known as Community Notes. Under the new approach, user-generated notes will add context and caveats to posts deemed misleading or in need of clarification.

Zuckerberg acknowledged that the change could reduce the platform's ability to filter harmful content efficiently, stating, "We're going to catch less bad stuff." Meta's decision has renewed debate over the balance between upholding free speech and moderating potentially harmful content online.

Mark Jones, a partner at Payne Hicks Beach, expressed scepticism over Meta's new approach. "Is delegating moderation of content to other users the best way of moderating content and creating a safe online space?" Jones questioned. He suggested that such delegation might exacerbate the spread of misinformation and disinformation, particularly if users are incorrect in their assessments.

At this stage, Meta has no immediate plans to implement these changes within the European Union, although Jones questioned whether that carve-out will prove to be only temporary.

Adding to the discourse, Iona Silverman of the law firm Freeths highlighted potential conflicts between Meta's move and the UK's Online Safety Act. "The justification for the removal of fact checkers seems to remove any bias or inhibition of free speech," she acknowledged. However, she noted the contradiction between Zuckerberg's admission that moderation may become less effective and the requirements of the Online Safety Act, which obligates social media platforms to prevent UK users, especially children, from accessing harmful content.

The Online Safety Act, which aims to protect users and particularly children, mandates that the regulator, Ofcom, provide guidance and require platforms to conduct risk assessments for potential harms starting this spring. Silverman voiced concerns about whether the Act's regime will prove effective, arguing that swift action is needed to ensure compliance. She suggested that measures such as Australia's ban on social media for under-16s could be a practical response to safeguarding young users.

Meta's move mirrors a wider trend among social media platforms toward alternative approaches to content moderation. X, formerly known as Twitter, already operates a comparable model that relies on user contributions to add context to disputed content.

As the strategy unfolds, it will feed into ongoing debate over the efficacy and ethics of user-driven moderation versus professional oversight. How well such alternatives can prevent the spread of false or harmful information remains to be seen as platforms like Meta balance user engagement with responsible stewardship.
