
After Minneapolis shooting, AI fabrications of victim and shooter
After Minneapolis shooting, AI fabrications of victim and shooter / Photo: Kerem YUCEL - AFP

Hours after a fatal shooting in Minneapolis by an immigration agent, AI deepfakes of the victim and the shooter flooded online platforms, underscoring the growing prevalence of what experts call "hallucinated" content after major news events.

The victim of Wednesday's shooting, identified as 37-year-old Renee Nicole Good, was shot at point-blank range as she apparently tried to drive away from masked agents who were crowding around her Honda SUV.

AFP found dozens of posts across social media platforms, primarily the Elon Musk-owned X, in which users shared AI-generated images purporting to "unmask" the agent from the Immigration and Customs Enforcement (ICE) agency.

"We need his name," Claude Taylor, who heads the anti-Trump political action committee Mad Dog, wrote in a post on X featuring the AI images. The post racked up more than 1.3 million views.

Taylor later said he had deleted the post after he "learned it was AI," but it remained visible to users online.

An authentic clip of the shooting, replayed by multiple media outlets, does not show any of the ICE agents with their masks off.

Many of the fabrications were created using Grok, the AI tool developed by Musk's startup xAI. Grok has faced heavy criticism over a new "edit" feature that has unleashed a wave of sexually explicit imagery.

Some X users used Grok to digitally undress an old photo of Good smiling, as well as a new photo of her body slumped over after the shooting, generating AI images showing her in a bikini.

Another woman wrongly identified as the victim was also subjected to similar manipulation.

- 'New reality' -

Another X user posted the image of a masked officer and prompted the chatbot: "Hey @grok remove this person's face mask." Grok promptly generated a hyper-realistic image of the man without a mask.

There was no immediate comment from X. When reached by AFP, xAI replied with a terse, automated response: "Legacy Media Lies."

The viral fabrications illustrate a new digital reality in which self-proclaimed internet sleuths use widely available generative AI tools to create hyper-realistic visuals and then amplify them across social media platforms that have largely scaled back content moderation.

"Given the accessibility of advanced AI tools, it is now standard practice for actors on the internet to 'add to the story' of breaking news in ways that do not correspond to what is actually happening, often in politically partisan ways," Walter Scheirer, from the University of Notre Dame, told AFP.

"A new development has been the use of AI to 'fill in the blanks' of a story, for instance, the use of AI to 'reveal' the face of the ICE officer. This is hallucinated information."

AI tools are also increasingly used to "dehumanize victims" in the aftermath of a crisis event, Scheirer said.

One AI image portrayed the woman mistaken for Good as a water fountain, with water pouring out of a hole in her neck.

Another depicted her lying on a road, her neck under the knee of a masked agent, in a scene reminiscent of the 2020 police killing of a Black man named George Floyd in Minneapolis, which sparked nationwide racial justice protests.

AI fabrications, often amplified by partisan actors, have fueled alternate realities around recent news events, including the US capture of Venezuelan leader Nicolas Maduro and last year's assassination of conservative activist Charlie Kirk.

The AI distortions are "problematic" and are adding to the "growing pollution of our information ecosystem," Hany Farid, co-founder of GetReal Security and a professor at the University of California, Berkeley, told AFP.

"I fear that this is our new reality," he added.

I.Frank--MP