Meta Policy Rollbacks Blamed for Surge in Harmful Content Online

Sapatar / Updated: Jun 17, 2025, 18:06 IST

Survey Reveals Sharp Spike in Harmful Posts After Meta's Policy Changes

A recent survey has uncovered a worrying trend: a marked rise in harmful content across Meta's platforms, including Facebook and Instagram, following the rollback of key content moderation policies. The report, conducted by a coalition of digital rights groups and watchdogs, indicates that the weakening of enforcement on hate speech, misinformation, and online abuse has led to a visible deterioration in platform safety, particularly in the run-up to major global elections.

Meta Accused of Prioritizing Profit Over Platform Safety

Researchers behind the study claim that Meta’s reduced investment in moderation is driven by cost-cutting measures and a broader shift toward automation. Since late 2023, the company has disbanded or downsized several teams responsible for handling political misinformation, fact-checking, and local language moderation, particularly in regions such as Africa, Asia, and Latin America. Advocacy groups argue this has left vulnerable communities exposed to coordinated disinformation campaigns, hate speech, and other harmful narratives.

Election Integrity Concerns Mount Worldwide

The timing of the rollback has sparked international concern, as more than 60 countries are set to hold national elections in 2024 and 2025. Analysts fear Meta's relaxed oversight could allow election interference, voter suppression, and politically motivated violence to spread unchecked. Digital rights organizations are urging the tech giant to reinstate its pre-2023 safeguards to prevent democratic erosion and social unrest.

Meta Responds with Caution, Defends Strategy

In response, Meta has defended its policy changes, asserting that it is focusing on “strategic enforcement” and leveraging AI tools to detect violations at scale. A spokesperson emphasized that Meta remains committed to combating harmful content but acknowledged that resource allocation is being optimized globally. Critics, however, maintain that AI alone cannot replace human oversight, especially in nuanced and region-specific contexts.

Calls Grow for Regulatory Oversight

The findings have reignited calls for governments to impose stricter regulations on tech giants, especially as digital misinformation becomes a growing threat to democratic institutions. Civil society groups are lobbying for increased transparency from Meta regarding policy enforcement, algorithmic changes, and resource distribution across countries.