TECH TIMES NEWS

Meta’s New ‘Vibes’ AI Feature Sparks Outrage After Platform Flooded With Explicit Deepfakes and Child Safety Concerns

Deepika Rana / Updated: Mar 12, 2026, 17:25 IST

Meta is facing intense scrutiny after reports emerged that its AI-powered “Vibes” feature has been surfacing explicit deepfakes and disturbing synthetic videos in users’ feeds. The feature, which uses generative AI to create and recommend short-form videos based on user interests, was introduced to boost engagement across Meta’s social media ecosystem. Watchdog groups and researchers, however, claim the tool is being misused to generate and spread inappropriate content.

The controversy has reignited debate over the risks of generative AI and the responsibility of tech companies to prevent abuse of such systems.


Allegations of Explicit Deepfakes and Harmful Content

According to multiple digital safety researchers, the “Vibes” feed has been populated with sexually explicit synthetic videos, including manipulated clips featuring Bollywood celebrities and other public figures. In some reported cases, the platform allegedly recommended AI-generated videos that appeared to depict minors in inappropriate scenarios.

Experts say such content is typically produced with deepfake technology, in which artificial intelligence alters or fabricates visuals to create realistic but false media. Even when the videos do not depict real events, their circulation raises serious ethical and legal concerns.


Growing Concerns Over AI Misuse

The incident highlights the broader challenge faced by tech companies deploying generative AI tools. While such technologies can power creativity and entertainment, they can also be exploited for harassment, misinformation, or exploitation.

Child safety organizations and digital rights groups have urged Meta to implement stronger moderation systems. They argue that AI-generated recommendation tools must include strict filters and detection systems to prevent harmful material from spreading.

Some researchers have also pointed out that recommendation algorithms can unintentionally amplify problematic content, since engagement signals often determine what gets surfaced to users.


Impact on Celebrities and Public Figures

Bollywood actors and other public personalities have frequently been targeted by deepfake creators in recent years. With AI tools becoming more accessible, manipulated videos and images can now be produced with minimal technical knowledge.

Industry observers warn that the presence of such content on mainstream social platforms could damage reputations, spread misinformation, and expose individuals to online harassment. Calls are growing for stricter laws governing the creation and distribution of deepfakes.


Meta’s Ongoing Moderation Challenges

Meta has repeatedly stated that it invests heavily in content moderation technologies and safety teams. The company has also introduced policies targeting non-consensual intimate imagery and manipulated media.

Critics counter that the rapid development of AI tools is outpacing moderation systems. The “Vibes” controversy may push Meta and other technology companies to strengthen both automated detection and human review mechanisms.

Regulators in several countries are already examining how generative AI platforms should be governed to protect users, particularly children.


Broader Debate Around AI Regulation

The situation reflects a larger global conversation about AI ethics, online safety, and platform accountability. Governments and technology experts are increasingly calling for regulatory frameworks that ensure AI systems are deployed responsibly.