Snapchat has introduced a new AI chatbot feature called My AI that lets users converse with an artificial intelligence assistant inside the app. This has raised questions about whether chatting with AI in Snapchat is safe, especially for younger users.
What is My AI in Snapchat?
My AI is a conversational AI chatbot developed by Snapchat and powered by OpenAI's GPT technology. It allows Snapchat users to have natural conversations with an AI assistant within the app. Users can ask questions, have casual chats, get recommendations, and more.
Here are some key things to know about My AI in Snapchat:
- It initially launched for Snapchat+ subscribers, Snapchat’s paid subscription service.
- The AI is designed to have casual, friendly conversations and assist users.
- Snapchat advises users not to share sensitive or confidential information with My AI, as conversations may be stored and reviewed to improve the service.
- Conversations occur within the Snapchat app’s familiar chat interface.
- The underlying language model is pre-trained by OpenAI using self-supervised learning techniques.
- Snapchat layers its own guidelines and guardrails on top of the underlying model to shape the AI’s conversational capabilities.
In summary, My AI aims to provide an engaging AI chatbot experience directly within Snapchat in a privacy-focused way.
Assessing the safety of chatting with My AI
When determining if chatting with an AI like My AI is safe, especially for younger users, there are a few key factors to consider:
Data privacy and security
Snapchat’s standard privacy policy applies to My AI, and conversations may be stored and reviewed to improve the service. Snapchat itself advises users not to share sensitive or confidential information with the chatbot. Casual chatting carries modest privacy risk, but users should treat My AI messages like any other data shared with the app.
Appropriateness of content
Snapchat says they are carefully tuning My AI to have benign, friendly conversations appropriate for Snapchat’s 13+ audience. They apply additional moderation and intentionally limit its capabilities compared to general-purpose chatbots. This reduces the risk of inappropriate content.
Potential for harmful advice
My AI is designed for casual chat, not serious advice-giving. Snapchat says they are focused on ensuring it avoids controversial topics and does not offer any dangerous or illegal suggestions. Its capabilities are purposely limited in this regard.
Impact on mental health
Some experts have raised concerns about impressionable younger users getting emotionally attached to AI chatbots or replacing human relationships. Snapchat will need to study this and determine appropriate usage guidelines.
Oversight and transparency
Snapchat has not yet provided full details on My AI’s underlying model, training process, capabilities and limitations. More transparency could help assure users and parents that adequate safety measures are in place.
Snapchat’s approach to AI safety
Snapchat states they are taking the following steps to ensure My AI acts safely, appropriately, and in line with human values:
- Screening the data used to customize the model and blocking sensitive topics
- Restricting capabilities to avoid unsafe suggestions
- Extensive internal testing for potential harms
- Ongoing monitoring of conversations and model corrections
- Limiting daily conversations to mitigate attachment issues
- Providing in-app reporting of concerning interactions
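Snapchat has not published how these guardrails are implemented. As a rough illustration only, the topic-blocking and daily-limit steps above could be sketched as a simple moderation gate in front of the chatbot. Every pattern, threshold, and function name here is an assumption invented for this sketch, not Snapchat's actual system:

```python
import re
from collections import defaultdict

# Hypothetical blocked-topic patterns -- Snapchat has not published its real list.
BLOCKED_PATTERNS = [r"\bself[- ]harm\b", r"\bdrugs?\b", r"\bweapons?\b"]

DAILY_MESSAGE_CAP = 50          # illustrative limit, not a published Snapchat value
_message_counts = defaultdict(int)  # user_id -> messages sent today

def screen_message(user_id: str, text: str) -> str:
    """Return 'rate_limited', 'blocked', or 'allowed' for an incoming message."""
    # Enforce the daily conversation cap first (mitigates attachment/overuse).
    if _message_counts[user_id] >= DAILY_MESSAGE_CAP:
        return "rate_limited"
    _message_counts[user_id] += 1
    # Then screen the text against the blocked-topic patterns.
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            return "blocked"
    return "allowed"
```

A production system would use trained content classifiers rather than keyword patterns, and would persist per-user counters with daily resets; this sketch only shows where such checks would sit in the pipeline.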
Snapchat also says they are committed to responsible and transparent AI development practices. However, full details are not yet available on their exact training approaches, internal testing, safeguards and monitoring systems.
Expert perspectives on chatting with AI
Here are some perspectives from AI experts and child safety advocates on the potential benefits and risks of Snapchat releasing an AI chatbot experience:
| Expert | Perspective |
| --- | --- |
| Dr. Tracy Chou, Founder of Block Party | “If designed responsibly, chatting with an AI could provide a fun way for teens to explore technology and identity. The risks come if safety isn’t the #1 priority.” |
| Mitchell Gordon, AI Researcher | “I’m less worried about data privacy with My AI and more concerned about emotional manipulation or failure to set appropriate boundaries.” |
| Sarah Smith, Child Online Safety Expert | “Snapchat needs to be extremely cautious here and make sure kids don’t get attached or take My AI’s suggestions too seriously.” |
Key takeaways from experts include prioritizing child safety, setting appropriate boundaries, minimizing addiction risk, and monitoring for emotional manipulation. Transparency around Snapchat’s safety measures is encouraged.
Risks of inadequate safety approaches
If Snapchat fails to sufficiently detect and mitigate the potential risks of My AI, some possible harms include:
- Children forming unhealthy emotional attachments or addictions
- Exposure to inappropriate, dangerous or illegal content
- Manipulation of minors by bad actors misusing the AI
- Negative impacts on mental health and self-esteem
- Privacy violations if data collection claims are inaccurate
- Reputational damage and public backlash against Snapchat
Child safety groups warn these risks could be highly detrimental and encourage Snapchat to take an extremely cautious approach.
Actions Snapchat can take to enhance safety
Here are some steps experts recommend Snapchat take to maximize the safety of My AI for younger users:
- Extensive testing specifically focused on mitigating child safety risks
- Consulting child development experts on appropriate capabilities and restrictions
- Enforcing strict usage limits to minimize addiction risk
- Providing prominent educational resources for parents and minors
- Developing robust anomaly detection to flag concerning interactions
- Committing to responsible and transparent AI development principles
- Conducting large-scale pilots to identify emerging issues
- Partnering with child safety organizations to incorporate best practices
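To make the “robust anomaly detection” recommendation above concrete, a minimal sketch of flagging sessions for human review might look like the following. The distress terms, session threshold, and data structures are invented for this example; a real system would rely on trained classifiers and far richer behavioral signals:

```python
from dataclasses import dataclass, field

# Illustrative distress signals and threshold -- assumptions for this sketch only.
DISTRESS_TERMS = {"hopeless", "hurt myself", "run away"}
SESSION_LENGTH_THRESHOLD = 100  # messages per session considered anomalous

@dataclass
class Session:
    user_id: str
    messages: list = field(default_factory=list)

def flag_for_review(session: Session) -> list:
    """Return the reasons (possibly none) a session should be escalated to a human."""
    reasons = []
    # Unusually long sessions may indicate overuse or emotional attachment.
    if len(session.messages) > SESSION_LENGTH_THRESHOLD:
        reasons.append("unusually long session")
    # Simple keyword check for possible distress language.
    text = " ".join(session.messages).lower()
    if any(term in text for term in DISTRESS_TERMS):
        reasons.append("possible distress language")
    return reasons
```

The design choice worth noting is that the detector only *flags* sessions for human review rather than acting automatically, which keeps a person in the loop for the highest-stakes decisions.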
Prioritizing child safety and responsible AI development should be Snapchat’s guiding principles as they shape and launch this new high-risk AI feature.
Conclusion
Chatting with AI chatbots on social platforms does carry valid child safety risks if they are not developed responsibly. However, Snapchat has emphasized the privacy protections, safety precautions, and limitations it is putting in place for My AI.
Additional transparency from Snapchat around safety measures, training data, anomaly detection, expert consulting and pilot results could provide further assurance. Responsible rollout and proactive monitoring for misuse will be critical.
Overall, while risks exist, Snapchat appears to be taking steps to ensure its AI chatbot experience aligns with child safety best practices. However, the details and effectiveness of its approach remain to be seen. Careful parent supervision is still advised as My AI rolls out.