Musk's New AI Photo Tool Makes Trump, Harris, and Biden Look Real

Elon Musk’s AI chatbot, Grok, began allowing users on Tuesday to create AI-generated images from text prompts and share them on X. Users quickly took advantage of the feature to flood the platform with fake images of political figures, including former President Donald Trump, Vice President Kamala Harris, and Musk himself. Some of these images depicted these public figures in obviously false and disturbing scenarios, such as being involved in the 9/11 attacks.

Unlike other mainstream AI image tools, Grok, developed by Musk’s AI startup xAI, appears to have minimal safeguards.

[Image: AI-generated photo of Elon Musk]

For instance, CNN was able to easily generate fake, photorealistic images of politicians and political candidates using Grok that, if taken out of context, could mislead voters. The tool also produced more benign but convincing images of public figures, such as Musk eating steak in a park.

Some users on X posted Grok-generated images appearing to show prominent figures using drugs, cartoon characters committing violent acts, and sexualized depictions of women in bikinis. One widely viewed post featured an image of Trump leaning out of a truck and firing a rifle, which CNN tests confirmed Grok could create.

This tool is likely to amplify concerns about AI’s potential to spread false or misleading information online, especially with the upcoming US presidential election. Lawmakers, civil society groups, and tech leaders have warned that misuse of such tools could cause confusion and chaos among voters.

Musk weighed in by posting on X, “Grok is the most fun AI in the world!” in reply to a user who praised the tool for being “uncensored.”

Many leading AI companies have taken steps to prevent their AI image generation tools from being used to create political misinformation, though researchers note that users can still sometimes bypass these measures. Companies like OpenAI, Meta, and Microsoft include technology or labels to help viewers identify AI-generated images.

Social media platforms like YouTube, TikTok, Instagram, and Facebook have also implemented measures to label AI-generated content, either by using technology to detect it or asking users to identify such content.

X did not immediately respond to requests for comment on whether it has any policies against Grok generating potentially misleading images of political figures.

The platform does have a policy against sharing “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm,” but it’s unclear how this policy is enforced. Musk himself recently shared a video on X that used AI to make it appear as though Harris had said things she didn’t, with only a laughing emoji to indicate it was fake.

The introduction of Grok’s image tool comes amid criticism of Musk for spreading false and misleading claims on X related to the presidential election, including raising doubts about the security of voting machines. This development also follows a livestreamed conversation on X between Musk and Trump, where the former president made over 20 false claims without any challenge from Musk.

Other AI image generation tools have faced backlash for various reasons. Google paused its Gemini AI chatbot’s ability to generate images of people after it produced historically inaccurate depictions of racial groups; Meta’s AI image generator was criticized for struggling to create images of people from different racial backgrounds together. TikTok also had to withdraw an AI video tool after it was discovered that users could create realistic videos of people spreading misinformation without any labels.

Grok does seem to have some restrictions; for example, a request for a nude image was met with the response, “Unfortunately, I can’t generate that kind of image.”

In another test, the tool stated it has “limitations on creating content that promotes or could be seen as endorsing harmful stereotypes, hate speech, or misinformation.” Grok added that it is important to avoid spreading falsehoods or content that could incite hatred or division.

However, the tool did generate an image of a political figure alongside a hate speech symbol in response to a different prompt, suggesting that any existing restrictions are not consistently enforced.
