Deepfake videos are on the rise, driven largely by increasingly capable and accessible open-source AI tools. Research from MIT indicates that a significant portion of these videos consists of non-consensual pornography, often targeting celebrities like Taylor Swift. Alarmingly, high school and middle school girls are increasingly becoming victims as well.
Given the difficulty of containing this wave of illicit content, an obvious question arises: can legislation help? John Villasenor, a professor at UCLA who works at the intersection of electrical engineering, law, and public policy, joins The Excerpt to discuss potential legislative and technological responses.
Deepfake technology isn’t new, but it has become far more accessible. According to Villasenor, the legislative landscape is still taking shape. California Governor Gavin Newsom recently signed 18 bills aimed at regulating AI, including measures targeting sexually explicit deepfakes of minors. Separately, San Francisco’s City Attorney has sued several websites that let users generate pornographic deepfakes of real people. Yet questions remain about how effective these measures will be.
Villasenor highlights the dual challenges of enforcement and legal scrutiny. While these laws aim to curb harmful uses of the technology, they may face constitutional challenges if they are drafted broadly enough to reach protected speech.
International cooperation is crucial in this fight. Villasenor notes that existing crime-fighting frameworks could help address deepfakes, but the internet's anonymity complicates matters. A single deepfake can span multiple countries, with the creator, the hosting server, and the victim each in a different jurisdiction, making accountability difficult.
To combat deepfake pornography effectively, Villasenor emphasizes the need for automated detection. He argues that social media platforms should invest in systems that can identify and filter out such content before it spreads, since reputable companies have no interest in hosting it.
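To make that idea concrete, here is a minimal, hypothetical sketch of how a platform might wire a detection model into its upload path. Everything here is an illustrative assumption rather than a description of any real platform's system: the function names, the per-frame scoring interface, and the 0.8 threshold are all placeholders, and the scorer is a stub where a trained deepfake-detection model would actually sit.

```python
# Hypothetical sketch of an upload-screening pipeline (not any real platform's API).
# A production system would replace the stub scorer with a trained detection model.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class ScreeningResult:
    mean_score: float  # average per-frame "likely synthetic" probability, 0..1
    flagged: bool      # True if the upload should be held for human review


def screen_upload(
    frames: Sequence[bytes],
    score_frame: Callable[[bytes], float],  # returns P(frame is synthetic)
    threshold: float = 0.8,                 # illustrative cutoff, not a standard
) -> ScreeningResult:
    """Score each sampled frame and flag the upload if the mean exceeds threshold."""
    if not frames:
        return ScreeningResult(mean_score=0.0, flagged=False)
    scores = [score_frame(frame) for frame in frames]
    mean = sum(scores) / len(scores)
    return ScreeningResult(mean_score=mean, flagged=mean >= threshold)


if __name__ == "__main__":
    # Dummy scorer for demonstration: pretends every frame looks synthetic.
    dummy_scorer = lambda frame: 0.9
    result = screen_upload([b"frame1", b"frame2"], dummy_scorer)
    print(result)  # ScreeningResult(mean_score=0.9, flagged=True)
```

Note the design choice in the sketch: flagged uploads are held for human review rather than removed automatically, which is one way a platform might balance false positives against the harm of letting abusive content circulate.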
For parents concerned about protecting their children from cyber sexual violence, Villasenor suggests fostering an understanding of responsible internet use. He stresses the importance of discussing the risks of sharing images online, even innocuous ones, since they can be easily manipulated.
Detection technologies are not foolproof. Villasenor points out that because deepfake techniques keep evolving, detection methods often lag behind, creating an “arms race” in which generation tools grow sophisticated enough to evade the latest detectors.
In political contexts, deepfakes pose a unique risk. A misleading deepfake can spread quickly, potentially influencing public perception before it can be flagged and corrected.
Villasenor observes that the conversation around deepfake pornography and related issues has matured significantly in recent years, driven by heightened awareness among legislators, parents, and young people. He hopes that awareness will translate into better detection technology and less harmful content, though he cautions that predicting where the technology goes next is inherently uncertain.
In summary, while legislation can play a role in combating deepfake pornography, a multi-faceted approach that includes education, technological advancements, and international cooperation is essential for meaningful progress.