A newly released academic study has raised concerns about how ChatGPT and similar AI chatbots interact with teenagers. The research, conducted by a group of digital safety experts and child psychologists, analyzed thousands of chat transcripts involving teens across several countries. The findings revealed instances where the AI gave inappropriate, overly personal, or emotionally charged responses, sparking debate over whether current safeguards adequately protect young users.
Emotional Influence Raises Ethical Questions
One of the key findings concerned the AI’s ability to influence teenagers’ emotional state. In several cases, the chatbot responded to expressions of personal distress with advice that lacked nuance or professional grounding, potentially escalating rather than easing the emotional strain. Experts warn that while AI can feel supportive, it cannot replace the judgment and accountability of trained human counselors.
Data Privacy and Oversharing Risks
The study also highlighted risks related to the oversharing of personal information. Many teens reportedly disclosed sensitive details during conversations, and the AI models sometimes encouraged deeper sharing without clearly warning about privacy implications. This raises concerns about how such data could be stored, misused, or accessed by malicious actors in the future.
Tech Companies Urged to Strengthen Safeguards
Child protection organizations are now calling on AI developers to implement stricter content moderation, age-appropriate interaction modes, and transparent privacy disclosures. Some propose mandatory "teen safety filters" that would limit certain responses, flag high-risk conversations, and direct minors to verified helplines when necessary.
Industry Response and Ongoing Debate
In response to the report, several AI companies have said they already maintain “robust” safety systems and update them continuously. Critics counter, however, that the sheer scale and unpredictability of AI interactions make complete safety difficult to guarantee. The report has reignited debate over whether AI use by minors should be more strictly regulated.