Meta Admits It Is Mistakenly Moderating Too Much

Meta is mistakenly removing too much content across its platforms, a senior executive admitted on Monday.

Nick Clegg, Meta’s President of Global Affairs, acknowledged that the company’s content moderation error rates remain too high, despite years of investment in moderation systems. He assured reporters that Meta is focused on enforcing its rules more accurately.

“We recognize that when enforcing our policies, our error rates are still too high, which interferes with the free expression we aim to support,” Clegg said during a press call I attended. “Too often, harmless content is removed or restricted, and too many users are unfairly penalized.”

Clegg also expressed regret over the company’s aggressive content removal during the COVID-19 pandemic. CEO Mark Zuckerberg recently testified before the Republican-led House Judiciary Committee, revealing that the decision to heavily moderate content was influenced by pressure from the Biden administration.

“We had very stringent rules during the pandemic, leading to the removal of large volumes of content,” Clegg explained. “At the time, no one knew how the pandemic would unfold, so, with the benefit of hindsight, we now feel that we overdid it a bit. We are acutely aware, especially because users voiced their complaints, that we sometimes over-enforced and made mistakes by removing or restricting innocuous content.”

Clegg’s remarks point to a broader issue at Meta: despite the billions of dollars the company spends annually on moderation, its automated systems have become too heavy-handed. The failures have been particularly noticeable on Threads, where takedown errors have been widespread. In one instance, photos of President-elect Donald Trump surviving an assassination attempt were mistakenly suppressed. Meta’s Oversight Board also recently warned that these moderation errors could result in the “excessive removal of political speech” ahead of the U.S. presidential election.

Meta has not made significant updates to its content policies since the election, though substantial changes appear to be on the horizon. Clegg referred to the rules as a “living, breathing document,” suggesting they are continually being evaluated and adjusted.

When asked about Zuckerberg’s recent dinner with Trump, and whether Meta would still resist government pressure to moderate content as Zuckerberg had pledged in his House Judiciary Committee testimony, Clegg avoided answering directly.

“I can’t give a running commentary on conversations I was not part of,” Clegg responded when asked about the dinner. “The administration is still being formed, and the inauguration hasn’t occurred, so those conversations are clearly at a high level. Mark is very eager to play an active role in the debates any administration needs to have about maintaining America’s leadership in technology, which is crucial given the geostrategic uncertainties around the world, particularly the pivotal role AI will play in that scenario.”

Clegg’s comments illustrate Meta’s ongoing struggle to balance content moderation with freedom of expression. The company has faced increasing scrutiny over its content removal practices, particularly in politically sensitive contexts. Despite these challenges, Meta appears committed to refining its moderation systems so that its platforms do not stifle discourse, even as it navigates pressure from governments and users alike.

As Meta looks ahead, Clegg’s statement hints at potential shifts in the company’s approach, but it remains unclear how it will adjust its policies to handle the growing complexity of moderating content in an era of heightened political and social tension.
