Published on November 8, 2023, 3:37 pm
An undisclosed number of girls at a New Jersey high school recently discovered that artificial intelligence (AI) had been used to generate what appeared to be nude images of them. The images, created from photos taken of the girls on campus, were shared among boys in group chats. An investigation is now underway, local police are involved, and counseling is being provided to affected students.
This disturbing incident raises the question of whether there should be a federal law against the harmful exploitation of underage victims through AI-generated content. Regrettably, there is currently no specific legislation that covers such crimes involving AI-generated nudes.
Deepfake photos and videos are becoming increasingly prevalent. Deepfakes are fabricated media that mimic the appearance or voice of real people by combining their features with other imagery or audio recordings. These creations can deceive viewers into believing they are genuine depictions of the targeted individuals.
According to researchers in 2019, 96% of the approximately 14,000 online deepfake videos they found were pornographic. The use of AI technology has facilitated the creation and dissemination of these manipulated explicit materials on various platforms. From high school students to popular YouTubers and celebrities, more and more individuals are falling victim to this alarming trend.
Notably, pedophiles have also taken advantage of AI technology on the internet, particularly in hidden corners like the dark web. AI-generated child pornography includes both images derived from photos of real children and wholly fabricated content synthesized from the countless digitized images available online. While existing federal child pornography laws theoretically cover drawings and cartoons depicting explicit sexual acts involving minors, no prosecutions specifically targeting AI-generated child pornography have yet succeeded.
Furthermore, even if someone were charged with producing or possessing such materials, the legal definitions of explicit sex acts involving minors would still need to be met. This poses significant challenges for protecting individuals who are portrayed semi-nude but not engaged in graphic conduct. Moreover, victims of nonconsensual AI-generated images face lifelong consequences from the violation of their privacy, including damage to their prospects for education and employment.
To bridge the gap between outdated legislation and the realities of high-tech advancements, all 50 state attorneys general are urging Congress to take action. President Joe Biden has also directed his administration to explore solutions that prevent generative AI from producing child sexual abuse material or creating nonconsensual intimate imagery.
In an attempt to address the ongoing issue of AI-generated nude depictions, Representative Joe Morelle introduced the Preventing Deepfakes of Intimate Images Act. This legislative proposal is a step in the right direction toward curbing the harmful use of AI technology. Several states, including California, Texas, and Virginia, have already enacted laws that provide legal recourse for victims affected by AI-generated content. These measures range from civil lawsuits to criminalization and potential implementation of digital watermarks to trace image origins.
It is crucial for Congress and federal agencies to consider various options to tackle this problem effectively. Implementing a mandatory “deepfake” label on fabricated content, exploring legal remedies for victims, and holding software manufacturers accountable are among the potential countermeasures that should be examined. Moreover, internet platforms should work closely with verified victims, facilitating prompt removal of such content upon request.
The issue at hand is not a single incident but an ongoing violation inflicted on victims each time these malicious images are accessed and disseminated. With nine states having taken steps to address the matter, it is imperative that the federal government take swift bipartisan action before more innocent lives suffer the devastating consequences of AI misuse.
Frank Figliuzzi’s article highlights the urgent need for comprehensive legislation in response to the emerging challenges posed by generative AI. A former FBI special agent focused on counterintelligence and espionage investigations, and now an MSNBC columnist and national security contributor for NBC News and MSNBC, Figliuzzi calls on lawmakers to prioritize this bipartisan issue and enact preventive measures before AI technology causes further harm to innocent people’s lives.