Caravan Magazine

A journal of politics and culture


Meta’s Fact-Checking Change: What It Means for Misinformation on Facebook and Instagram

Meta’s recent decision to end its fact-checking program in favor of a crowdsourced model is sparking intense debate over its potential consequences for online misinformation. Just ahead of Donald Trump’s return to the presidency, Meta is shifting its approach to content moderation in a move that prioritizes “free expression,” a stance that has both supporters and critics questioning what this means for the future of misinformation and hate speech on Facebook and Instagram.

For years, Meta, which owns Facebook, Instagram, and Threads, has funded third-party fact-checking organizations to review content. However, the company has faced increasing pressure, particularly from conservative groups, who argue that these efforts disproportionately target right-wing voices. Trump himself had even threatened Meta’s CEO, Mark Zuckerberg, with legal action if the platform interfered with the 2024 election. As part of an effort to mend relations, Zuckerberg donated $1 million to Trump’s inaugural fund and appointed conservative Joel Kaplan as Meta’s global policy chief. Kaplan’s appointment has brought about a key shift in Meta’s content moderation policies, embracing a system similar to Elon Musk’s Community Notes at X (formerly Twitter), where unpaid users, not experts, help police content.

In a video statement, Zuckerberg acknowledged that this shift could lead to less effective content moderation, admitting, “We’re going to catch less bad stuff.” When Trump was asked whether the change was a response to his past threats, he replied, “Probably.”

While many conservatives and free-speech activists applaud the move, critics fear it could worsen the spread of misinformation. Valerie Wirtschafter, a fellow at the Brookings Institution, argues that the crowdsourced model, while valuable, is untested on a large scale. “Meta already struggles with bad content, and this could make things worse,” she warns.

This new approach comes on the heels of a long and complicated history of Meta’s attempts to combat misinformation. Following the 2016 U.S. elections, Facebook launched its fact-checking initiative in response to concerns over foreign interference and the spread of false claims. Yet despite these efforts, Meta’s moderation systems have repeatedly fallen short. The company faced criticism from Amnesty International for failing to prevent the 2017 violence against the Rohingya in Myanmar, and in 2021 a report found that Facebook could have prevented billions of views of election misinformation had it adjusted its algorithms.

The polarization over content moderation intensified during the COVID-19 pandemic, when Facebook, later under pressure from the Biden administration, cracked down on COVID-19 misinformation, including contested claims about the virus’s origins. Those efforts backfired when some experts later lent credence to the “lab leak” theory. Amid mounting criticism, Zuckerberg opted to de-prioritize news on the platform.

In 2022, Elon Musk’s acquisition of Twitter marked a turning point in content moderation. Musk gutted the company’s safety teams and promoted Community Notes, a volunteer-driven model in which users add corrections and context to posts. While early studies showed some success in combating misinformation on X, others pointed to flaws, such as accurate notes failing to reach wide audiences before misleading posts proliferated.

Meta’s shift toward a similar model, starting in the U.S., is drawing mixed reactions. Kaplan, a former deputy chief of staff under President George W. Bush, presented the change on Fox & Friends, stating that it would “reset the balance in favor of free expression.” Zuckerberg, who recently met Trump at Mar-a-Lago, also voiced concerns over the perceived political bias of fact-checkers and announced that restrictions on controversial topics like immigration and gender would be lifted.

Trump has praised the change, describing it as a positive step, while some Republicans may see it as an opportunity to reassess social media regulations, including potential changes to Section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content.

However, the policy shift has left many social media experts and misinformation researchers deeply concerned. Public Citizen criticized the move, stating that it would lead to more dangerous misinformation on Meta’s platforms. Tech journalist Kara Swisher also defended fact-checkers, arguing that they were not the problem: “toxic floods of lies on social media,” she said, are what eroded trust.

Wirtschafter, who has studied similar crowdsourced moderation systems, warned that Meta’s implementation appears hastily executed. Unlike X, which refined its Community Notes program over several years, Meta is rolling out the model without extensive testing or adjustments for its distinct platforms—Facebook, Instagram, and Threads—each with its own content and user base. Given Meta’s existing struggles with spam and AI-generated content, she questions whether user-driven moderation can cope with these problems.

Luca Luceri, a research assistant professor at USC, echoes these concerns, highlighting the risks of manipulation and the potential for harmful content to be amplified. He also points out that content related to mental health and eating disorders may go unchecked without adequate moderation.

Additionally, the shift away from fact-checking could have significant consequences for the fact-checking industry itself. Meta’s partnerships with fact-checking organizations accounted for 45% of the industry’s total income in 2023. The elimination of these partnerships could deal a serious blow to a sector that is already underfunded.

As Meta moves forward with its new approach, the broader implications for misinformation, content moderation, and online trust are far from clear. With a growing reliance on crowdsourced content policing, Meta’s experiment could reshape the digital landscape in unpredictable ways.
