Published on November 28, 2023, 1:30 pm

Artificial Intelligence (AI) technology has become increasingly popular among British teenagers, as revealed by new research conducted by UK media regulator Ofcom. The study found that nearly 80 percent of UK teenagers have already utilized generative AI tools and services.

Ofcom’s study highlighted several key findings. Four out of five online teenagers aged 13 to 17 have engaged with generative AI tools and services, and a significant minority of younger children aged 7 to 12 are also using the technology. In contrast, internet users aged 16 and over appear more hesitant, with only 31 percent having used generative AI. Of the 69 percent who have never used it, almost one-quarter admit they do not know what it is.

Snapchat’s My AI emerged as the most popular generative AI tool among kids and teens, used by half (51%) of online 7 to 17-year-olds. Online teenage girls were the most enthusiastic early adopters, with three in four (75%) using the tool. Among internet users aged 16 and over, ChatGPT ranked as the most widely used generative AI service, used by roughly one in four (23%). Among internet users aged 7 to 17, boys (34%) were more likely than girls (14%) to use ChatGPT.

The study also highlights how internet users aged 16 and over are employing generative AI. Around 58 percent said they use it for fun, while others use it for work (33%) or studies (25%). The most popular activities include chatting and exploring what AI can do (48%), searching for information (36%), seeking advice (22%), creative writing such as poems or song lyrics (20%), creating images (20%), videos (9%) or audio (4%), and programming (11%).

Yih-Choung Teh, Director of Strategy and Research at Ofcom, noted that adopting new technologies is “second nature to Gen Z”, but also acknowledged concerns about the potential risks associated with AI. This echoes a trend already seen in education: more than 40 percent of UK universities have opened investigations into students suspected of cheating with AI chatbots such as ChatGPT, amounting to almost 400 investigations across 48 institutions since December 2022.

Ofcom emphasized that certain generative AI tools will fall within the scope of the UK’s new online safety legislation. The regulator will scrutinize how companies proactively assess the risks their products pose to users and take effective measures to protect them from potential harm.

In related developments, the UK’s National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA) jointly published global guidelines for building secure AI systems. The “Guidelines for Secure AI System Development” cover four main areas: secure design, secure development, secure deployment, and secure operation and maintenance. Seventeen countries, including the US, have pledged their support for these guidelines.

The UK also hosted the AI Safety Summit in November, where representatives from multiple countries signed an agreement on closer cooperation in the development and regulation of artificial intelligence. The “Bletchley Declaration” underlines both the opportunities and risks of AI and aims to establish a shared, evidence-based scientific understanding of those risks. Signatory nations, including Brazil, Canada, China, Germany, Kenya, Saudi Arabia, and the US, also intend to develop risk-based policies to ensure safety in light of these challenges.
