Balancing Free Speech and Misinformation: The Impact of Meta’s Decision to Remove Fact-Checkers

Photo by Dima Solomin on Unsplash

Mark Zuckerberg, founder, chairman, and CEO of Meta (formerly Facebook), recently announced the removal of fact-checkers from the platform in an effort to reduce censorship and promote free speech. Since its inception in 2004, Facebook has grown into one of the world’s most widely used social media platforms, with approximately 3 billion monthly active users.

As one of the most popular social media platforms in the world, alongside X, Facebook connects billions of people and plays a major role in shaping online communication and discourse. However, the decision to remove fact-checkers has sparked considerable debate. While it emphasises the value of free speech, it also creates risks by allowing hate speech and misinformation to spread more easily. This raises questions about how Facebook can balance its enormous influence with the need to keep the platform safe and trustworthy for its users.

The Role of Fact-Checkers in Combating Misinformation  

Fact-checkers are independent organisations or individuals who review content to verify its accuracy. Their role is critical in identifying and addressing misinformation on social media platforms. Meta has worked with third-party fact-checkers to review posts flagged as potentially false, whether through automated detection, user reports, or the fact-checkers’ own observations. By verifying the accuracy of content and taking action to label, limit, or remove misinformation, Meta has aimed to create a safer and more trustworthy environment for its billions of users.

This initiative has been a positive step, as it helps combat the risks of misinformation, which can have serious real-world consequences. However, Mark Zuckerberg has recently decided to move away from third-party fact-checkers, citing concerns that these organisations are “too politically biased.” He argues that this bias has caused too many users to feel censored for sharing their opinions. To address this, Meta is introducing “Community Notes,” a system in which users add context to questionable posts instead of relying on professional fact-checkers.

This approach is similar to the system used by X (formerly Twitter), which also relies on Community Notes to provide context to potentially misleading or false posts. On X, users can collaboratively add notes to posts that others might find questionable, with the aim of giving more context without outright censoring the content. While this method allows for greater user involvement, it has faced criticism for being inconsistent and potentially allowing misinformation to persist if the majority of users support false narratives.  

While Meta’s shift to the Community Notes system is intended to reduce censorship and give users more control, it raises concerns about misinformation. Without professional fact-checkers, it may become harder to contain the spread of false or harmful content. Community Notes may not uphold the same standards of accuracy and fairness as trained professionals, which could let false or biased posts spread more easily.

This move highlights the difficult balance Meta faces between protecting free speech and keeping its platforms safe and accurate.

The Consequences of Removing Professional Fact-Checkers 

As a trained media monitor for Get the Trolls Out!, I have developed the skills to identify misinformation quickly. However, for those without the same experience, telling what is true from what is false can be a real challenge. For example, in my monitoring I have observed a troubling rise in antisemitic misinformation on social media platforms such as X and Meta, including Holocaust distortion.

This is particularly alarming because misinformation of this kind does not just mislead; it actively fuels hate speech against particular communities. Holocaust distortion, for example, minimises or denies the horrors faced by Jewish people, which not only disrespects the victims but also emboldens antisemitic hate groups. When left unchecked, such false narratives can spread quickly, normalising dangerous stereotypes and encouraging hostility towards vulnerable communities.

Furthermore, the risks of unchecked misinformation are not merely theoretical; they often have real-world consequences. One example occurred in Southport, England, last summer, when false rumours and misleading content spread rapidly on social media, falsely accusing the Muslim community of involvement in criminal activity. This misinformation sparked racial tensions and led to violent clashes across the UK. The social media platforms where these rumours festered allowed harmful narratives to spread unchecked, exacerbating the situation.

The Southport riots demonstrate how quickly misinformation can escalate into physical conflict, especially when it targets specific communities and relies on false or distorted narratives. As was seen, the spread of unchecked content can easily lead to real-world violence, which can have long-lasting effects on both the affected communities and the wider society. 

Meta’s decision to remove professional fact-checkers in favour of Community Notes could make situations like this even worse. Without the expertise and oversight of trained fact-checkers, the responsibility for identifying and addressing harmful misinformation falls to the general user base—many of whom may lack the skills or context to recognise antisemitic tropes or other forms of hate speech. This creates a risk that harmful content will remain unchecked or even gain credibility if enough users fail to challenge it.  

By moving away from fact-checkers, Meta risks creating an environment where hate speech and dangerous misinformation can thrive. This not only threatens the safety of targeted communities but also undermines the trust and integrity of the platform itself. Addressing complex and harmful narratives like Holocaust distortion requires specialised knowledge which cannot simply be replaced by a system like Community Notes that relies on regular users.  

Hate Speech Stats on Meta 

According to a 2021 report, the prevalence of hate speech on Meta’s platforms dropped by nearly 50%. By 2023, Meta had increased its removal of reported antisemitic content to 35%, up from 23% in 2021. While these figures show progress in tackling hate speech, Meta’s decision to move away from professional fact-checkers raises concerns that this progress could be lost and that hate speech might begin to increase again.

As argued above, crowd-sourced systems like Community Notes are not reliable enough to fully replace professional fact-checkers. A hybrid approach that combines both methods could offer a better balance of accuracy and user involvement. Furthermore, there is limited evidence that Community Notes on X has significantly reduced hate speech; reports indicate that many misleading or harmful posts remain unchecked. For example, a study found that 74% of U.S. election-related misinformation did not receive accurate corrections through Community Notes, showing that the system struggles to keep up with rapidly spreading false information.

Conclusion: Balancing Free Speech and Accuracy 

To conclude, Meta’s decision to remove professional fact-checkers in favour of Community Notes may be motivated by a desire to promote free speech and reduce censorship, but it carries significant risks of misinformation and harmful content. Community Notes may encourage free speech, yet it lacks the expertise and consistency of professional fact-checkers, making it harder to address complex issues like Holocaust distortion. Evidence from platforms like X shows that such systems often struggle to combat misinformation effectively. A hybrid approach, combining professional oversight with user involvement, would strike a better balance between accuracy and free speech, ensuring a safer and more trustworthy platform.
