Published on November 16, 2023, 6:55 pm

An independent review by Common Sense Media, a nonprofit advocacy group for families, has raised concerns about the safety of popular AI tools for kids. The organization recently launched a ratings system that includes “nutrition labels” for AI products such as chatbots and image generators, with the aim of helping parents evaluate whether these products are suitable for their children. The review found that generative AI models, including Snapchat’s My AI and DALL-E, posed significant risks, such as reinforcing unfair biases and storing personal user data.

Tracy Pizzo-Frey, Senior Advisor of AI at Common Sense Media, emphasized that generative AI models are not always correct or values-neutral because of the vast amount of internet data they are trained on. These models often reflect cultural, racial, socioeconomic, historical, and gender biases. Pizzo-Frey hopes the ratings will encourage developers to build protections against misinformation and unintended repercussions.

In TechCrunch’s tests, reporter Amanda Silberling found that Snapchat’s My AI was generally more weird than actively harmful. Even so, Common Sense Media gave it a low 2-star rating, citing responses that reinforced unfair biases and included inappropriate content. Snap responded to the rating by highlighting that My AI is an optional tool whose limitations are clearly communicated to users and parents.

Other generative AI models like DALL-E and Stable Diffusion were also flagged for potential risks like objectification and sexualization of women as well as reinforcement of gender stereotypes. Additionally, these models have been misused to create pornographic materials using celebrities’ likenesses.

In the mid-tier of ratings were AI chatbots such as Google’s Bard and ChatGPT. While they raised fewer concerns than the lowest-rated products, these chatbots can still produce biased responses, particularly for users from diverse backgrounds, and may generate inaccurate information or reinforce stereotypes.

The only products receiving positive reviews from Common Sense Media were AI products designed for educational purposes, including Ello’s AI reading tutor and book delivery service, Khanmigo from Khan Academy, and Kyron Learning’s AI tutor. These products prioritize responsible AI practices, fairness, diverse representation, and kid-friendly design.

Common Sense Media plans to continue publishing ratings and reviews of new AI products to inform not only parents and families but also lawmakers and regulators. The organization believes that clear “nutrition labels” for AI products are necessary to protect the safety and privacy of children, as well as all Americans. It warns that without proper regulation, AI could create data privacy risks and harm democratic institutions.

It is vital for parents to be aware of the potential risks associated with AI tools their children use. By understanding the limitations, biases, and ethical considerations of these products, parents can make informed decisions about whether they are appropriate for their children.
