Why is Meta shutting down fact-checkers? | Explained

The story so far: On January 7, Meta CEO Mark Zuckerberg said the company would get rid of fact-checkers and simplify its content policies by removing restrictions on topics, calling those restrictions "out of touch with mainstream discourse." In a five-minute video, he said the company would return to its roots, as fact-checkers have been "too politically biased" and "destroyed more trust than they created, especially in the U.S."

How did Meta get into fact-checking?

After the results of the 2016 U.S. presidential election were out, Meta, then known as Facebook, faced serious backlash for amplifying political posts that helped tilt the election in favour of Donald Trump. To rebuild its reputation, Facebook roped in content moderators globally and developed technology to filter out harmful content.

Meta started its independent fact-checking programme in partnership with the International Fact-Checking Network (IFCN) and the European Fact-Checking Standards Network (EFCSN). Over time, Meta became one of the largest donors to IFCN.

Meta worked with fact-checkers to address misinformation on its platforms: Facebook, Instagram and Threads. While the fact-checkers identified misinformation and rated it based on the seriousness of the violation, Meta followed up with action and informed users of the measures it took. Beyond fact-checking, partner organisations worked across Meta's platforms to carry out research and rate content on a qualitative scale: false, altered, partly false, missing context, satire, and true. Per IFCN's 'State of Fact-Checkers in 2023' report, income from Meta's Third-Party Fact-Checking Programme and grants remain fact-checkers' predominant revenue streams. Moreover, 68% of fact-checking organisations have 10 or fewer employees, while only 6.6% employ 31 or more people.

Why was there a need for fact-checkers?

Fact-checkers play a vital role in identifying false and misleading content promoted on social media platforms by domestic accounts, and at times by foreign regimes. They also played a crucial role during the COVID-19 pandemic by correcting misinformation on social platforms. Under Meta's programme, if content was rated false or altered, its distribution across Meta's apps was reduced. If key information was missing or the content was satirical, Meta might provide the needed facts. Content rated poorly by a fact-checker might not be suggested to users, and repeat offenders could face penalties such as restricted reach, or being unable to monetise their content or turn it into a news page.

What other steps were taken by Meta?

Apart from relying on fact-checkers, Meta set up an Oversight Board to adjudicate cases involving serious content policy violations. The board heard serious content violation cases and made binding decisions to uphold or overturn Meta’s own actions. Gradually, Meta started to move away from news content in general to keep its platform free from disinformation-prone content. The company said it will not “proactively recommend content about politics on recommendation surfaces across Instagram and Threads”, noting that it wants these apps to be a “great experience” for all.

Now that is starting to change under Joel Kaplan, Meta’s new chief of global affairs. Mr. Kaplan said “civic content” about elections and politics would return to the apps, and that users can choose what they want to see. He expanded on Mr. Zuckerberg’s video clip, noting that the platform will also get rid of a number of restrictions “on topics like immigration and gender identity that are the subject of frequent political discourse and debate”. He said, “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.”

What is Community Notes?

Meta will be moving towards an X-style content moderation system called 'Community Notes'. Under this model, instead of a centralised authority taking action against misinformation, users work together to add context that appears under false or even blatantly illegal content.

The feature itself predates the Elon Musk era at Twitter. It was originally conceived as Birdwatch, which let users add context to posts they felt required more information. Launched in 2021, the feature gained prominence in March 2022, when misinformation about the Russia-Ukraine conflict was rife on the platform.

While crowd-sourced fact-checking is seen by some as a more scalable way of implementing content moderation, Community Notes can also reflect the biased viewpoints of a majority on controversial subjects. The feature can also be slow, meaning false or hateful content may go viral long before context is added to clarify the post. Furthermore, under X's current style of content moderation, even posts glorifying slavery and Nazism are allowed to remain online, unchallenged.

In an open letter to Mr. Zuckerberg, IFCN said, “There is no reason Community Notes couldn’t co-exist with the third-party fact-checking programme; they are not mutually exclusive. A Community Notes model that works in collaboration with professional fact-checking would have strong potential as a new model for promoting accurate information.”

But Meta has decided on a hands-off approach to content moderation, stopping its demotion of fact-checked content and removing the full-screen warnings over flagged posts. The company will focus more on illegal content and high-severity violations, while adjusting its content filters to make it harder for flagged content to be taken down, even though this means catching "less bad stuff," as per Mr. Zuckerberg.

What is the significance of the new policy?

Mr. Zuckerberg’s shift in stance comes as a new Trump administration takes charge. The tech CEO dined with the President-elect at his Mar-a-Lago resort in November to repair a fraught relationship. Earlier, he publicly praised Mr. Trump’s conduct after the former President survived an assassination attempt. In the video clip, Mr. Zuckerberg referenced the 2024 election as a “cultural tipping point” and committed to “restoring” free expression across Meta platforms. In essence, he was aligning his values with the incoming conservative administration by making a clean break from the old one.

While the plan to end the fact-checking programme in 2025 applies only to the U.S., Meta runs similar programmes in more than 100 countries. Some of these countries are highly vulnerable to misinformation that spurs political instability, election interference, mob violence and even genocide. If Meta decides to stop the programme worldwide, it is almost certain to result in real-world harm in many places, the IFCN said in its open letter.

Published - January 12, 2025 12:40 am IST