Meta is seeking to reassure parents about the safety of its platforms as a growing number of countries move to ban social media for minors. In a bid for greater transparency, the company is introducing a feature that lets parents see the topics their teens have discussed with Meta AI over the past week.
According to a recent blog post from Meta, “Parents will be able to see the topics their teen has been asking Meta AI about in [Facebook, Messenger or Instagram] over the past week.” The topics families can explore span a wide range, from School and Entertainment to Lifestyle, Travel, Writing, and Health and Wellbeing, among others, with the stated aim of prompting conversations between parents and teens.
For those parents monitoring their teens’ accounts on Meta platforms, this new feature will be conveniently located in an Insights tab within the supervision settings, accessible both in the app and on the web. By tapping on a specific topic, parents can delve into various categories; for example, sub-categories under Lifestyle could include fashion, food, and holidays, while the Health and Wellbeing topic encompasses fitness, physical health, and mental health.
In partnership with the Cyberbullying Research Center, Meta has created innovative “conversation starters” aimed at fostering open-ended discussions about teens’ experiences with AI. These tools are designed to guide parents and teens through important topics, and they are accessible on the Family Center website as well as through a link provided in the new Insights tab.
Additionally, Meta has disclosed more information regarding its AI Wellbeing Expert Council, which is tasked with offering ongoing insights and feedback on the AI experience for teenagers. This council will consist of three existing advisory groups along with new members possessing expertise in responsible and ethical AI, affiliated with the National Council of Suicide Prevention and various esteemed universities. Notably, Meta also has a distinct oversight board that addresses issues ranging from AI to content moderation.
Delegating moderation responsibilities to busy parents appears to be a growing trend for Meta. Recent reports indicate the company is reducing its reliance on third-party content moderation vendors, opting instead to handle those tasks with AI systems.
The potential dangers of AI for teenagers have prompted several countries, including Spain, to impose bans on social media for younger users. In one alarming incident in Canada, a teenager received explicit instructions from OpenAI’s ChatGPT on carrying out a school shooting, and a similar case is under investigation in Florida. AI chatbots have also been implicated in multiple tragic cases of teen suicide.
If you or someone you know is in crisis, the National Suicide Prevention Lifeline in the US is available at 1-800-273-8255, or you can simply dial 988. The Crisis Text Line can be reached by texting HOME to 741741 (US), 686868 (Canada), or 85258 (UK). For those outside these countries, Wikipedia maintains a list of crisis lines for additional support.









