OpenAI Just Shut Down an Iran-Linked Disinformation Campaign Involving ChatGPT. Here’s What to Know

As concerns persist about foreign interference in the 2024 election and the potential risks of artificial intelligence, OpenAI announced late last Friday that it had uncovered an “Iranian influence operation” using its tools to generate content aimed at spreading disinformation, particularly concerning the U.S. presidential race.

OpenAI reported that it had banned several accounts linked to the campaign and continues to monitor for additional attempts to disrupt the election. The content was created in both English and Spanish, according to the company.

“We take any use of our services in foreign influence operations very seriously,” OpenAI stated in a blog post. “As part of our efforts to support the broader community in countering this activity, we have shared threat intelligence with government, campaign, and industry stakeholders after removing the accounts. OpenAI is committed to detecting and mitigating this kind of abuse on a large scale.”

Here’s a breakdown of the group’s activities and what OpenAI has revealed about their intentions.

What were the operators doing with OpenAI?
The group, identified as Storm-2035, was producing fake news articles and social media comments to influence public opinion about Kamala Harris and Donald Trump.

Did the operators favor one candidate over the other?
Not particularly. Storm-2035 aimed to make both the hashtags #DumpTrump and #DumpKamala trend, creating stories and social media posts about both candidates.

Did any of these stories or posts gain traction?
According to OpenAI, the engagement on almost all posts from this group was minimal. The fake news sites saw little interaction, and the social media posts received few (if any) likes, comments, or shares. On the Brookings Breakout Scale, which measures impact on a 1-6 scale (with 1 being the lowest), this operation was rated at the low end of Category 2. This suggests the operation was active on multiple platforms, but there was no significant evidence that real people were engaging with it.

“The majority of social media posts we identified received few or no likes, shares, or comments. Similarly, there was no evidence of the web articles being shared across social media,” OpenAI noted.

How many articles and comments were generated?
OpenAI did not specify the exact number of articles and posts produced by the operation.

What domains were associated with the generated posts?
OpenAI revealed that Storm-2035 had produced long-form articles distributed across five websites posing as news and information outlets, all of which remain active online.

Have there been other campaigns?
Yes. Earlier this month, a report from Microsoft’s Threat Analysis Center revealed that groups connected to the Iranian government were using various methods to influence the elections. OpenAI’s investigation drew on information from that report. Iran was also linked in a Secret Service report to a plot to assassinate Trump, though this was unrelated to the earlier attempt on his life this year.

Did Storm-2035 focus exclusively on the 2024 election?
No. The group also created content related to Israel’s invasion of Gaza and Israel’s presence at the 2024 Olympics. Additionally, Storm-2035 appears to have spread disinformation about Venezuelan politics, the rights of Latinx communities in the U.S. (in both Spanish and English), and Scottish independence.

OpenAI also mentioned that “they mixed their political content with comments about fashion and beauty, possibly to seem more genuine or to build a following.”

Are other countries using OpenAI to disrupt the election?
Yes. Three months ago, OpenAI released a report indicating that it had disrupted five online campaigns attempting to manipulate public opinion using its technologies. These efforts were linked to state actors and private companies in Russia, China, and Israel, as well as Iran.

What should voters expect between now and the election?
Deepfakes have already surfaced in this election cycle, including an AI-generated robocall impersonating President Biden that discouraged voting in New Hampshire, though not every viral claim of an AI fake has proven accurate.

While OpenAI managed to halt this attempt by Iran, it’s unlikely to be the last effort to influence voters with misinformation before November 5. A report from the Director of National Intelligence issued late last month warned of “a range of foreign actors conducting or planning influence operations targeting U.S. elections this November.” Russia and China were identified as major threats.

“Moscow continues to employ a wide array of influence tactics and actors, aiming to conceal its involvement, expand its reach, and create content that resonates more with U.S. audiences,” the report stated. “These actors are working to support a presidential candidate, influence congressional elections, undermine public confidence in the electoral process, and deepen sociopolitical divisions.”

The Electronic Frontier Foundation offers advice on spotting fake news, such as verifying the author’s identity, learning more about the source, and consulting fact-checking sites when encountering bold claims.
