OpenAI’s platforms, including its AI image generation tool DALL-E, rejected more than 250,000 requests to create deepfakes of US election candidates, the company reported. The requests, which sought to generate images of key figures like President-elect Donald Trump, his vice-presidential pick JD Vance, current President Joe Biden, Democratic candidate Kamala Harris, and her vice-presidential nominee Tim Walz, were blocked as part of OpenAI’s safety protocols.
In a blog update shared on Friday, OpenAI explained that these rejections were part of preemptive “safety measures” implemented before the US election. The company emphasized that these safeguards are crucial in the context of elections to prevent the misuse of AI for deceptive or harmful purposes.
“These guardrails are especially important in an elections context and are a key part of our broader efforts to prevent our tools from being used inappropriately,” OpenAI’s team noted in the update. The company added that, based on its analysis, it found no evidence of election-related influence operations achieving widespread viral reach on its platforms.
OpenAI previously reported disrupting an Iranian influence operation named Storm-2035 in August, which had attempted to produce politically charged content under the guise of both conservative and progressive news outlets. Accounts linked to Storm-2035 were subsequently banned from OpenAI’s platforms. In October, the company further revealed it had thwarted more than 20 other deceptive operations worldwide.
Despite these attempts to exploit its platforms, OpenAI’s report concluded that US election-related operations failed to generate viral engagement.