Meta, the parent company of Facebook and Instagram, is preparing to use publicly available user data from its European platforms to train its artificial intelligence (AI) models, sparking renewed debates around data privacy and ethical AI practices across the continent.
According to internal sources and updates to Meta’s privacy policies, the tech giant is revising how it utilizes content shared publicly by users in Europe. The company aims to incorporate posts, comments, and publicly visible profile information into the datasets that power its next-generation AI tools. This initiative aligns with Meta's broader push to stay competitive in the generative AI space against rivals like OpenAI, Google, and Anthropic.
What Data Will Be Used?
The data being considered for training includes only publicly shared content. Meta emphasizes that private messages, non-public posts, and content shared within limited groups or private accounts will not be used. Additionally, Meta insists that data from users under the age of 18 will be excluded from AI training datasets.
This mirrors Meta’s previous approach in other global markets, but Europe presents unique legal and regulatory hurdles due to the General Data Protection Regulation (GDPR).
Compliance and Controversy
Meta argues that its data usage plans are compliant with the GDPR, stating that users have the right to object to the use of their data and can do so through updated privacy controls. However, critics warn that because the system includes user data by default unless people actively object, the change may go unnoticed by many, and that "public data" does not necessarily equate to informed consent for AI training purposes.
Privacy advocates across Europe have expressed concern. “Even if data is public, that doesn't mean it was shared with the expectation of becoming training fodder for AI,” said Eva Pohlmann, a data rights analyst with a German privacy NGO. “Meta must be transparent, and users need meaningful, easily accessible ways to opt out.”
Data protection authorities in countries like France and Ireland are reportedly reviewing Meta’s updated policies. The Irish Data Protection Commission, which oversees Meta’s European operations, has not yet issued a formal statement.
Strategic Goals Behind the Move
Meta’s renewed push for European data stems from its ambition to build more localized and culturally aware AI models. By training on European languages, dialects, and social contexts, Meta believes its AI tools will become more accurate and useful across the EU — for everything from content moderation and translation tools to generative text and image models.
In a blog post, Meta wrote, “Developing high-quality AI tools requires diverse and representative datasets. By including publicly available content from our European users, we aim to deliver more inclusive and responsive AI systems.”
What’s Next?
Meta is expected to roll out the changes in phases starting in late April 2025. Users across Europe may begin seeing new prompts about data usage, with the ability to opt out via their account settings.
Legal analysts predict potential court challenges or regulatory actions if Meta’s approach is found to stretch the boundaries of GDPR compliance. For now, the company appears set on pushing forward with its AI ambitions — even as the debate over data ownership and ethical AI grows louder.