In one imagined world, Queen Elizabeth II raves about cheese puffs, a pistol-brandishing Saddam Hussein saunters into a wrestling arena, and Pope John Paul II wobbles on a skateboard.
Hyper-realistic AI videos featuring long-deceased public figures are spreading rapidly online, fueled by easy-to-use tools such as OpenAI’s Sora. Their rise has sparked urgent questions about who controls a person’s image after death - and how far digital resurrection should go.
Since its launch, the app - often labeled a deepfake factory - has inspired countless clips that bring back historical icons like Winston Churchill alongside pop culture legends including Elvis Presley and Michael Jackson.
One TikTok video reviewed by AFP shows Queen Elizabeth II, draped in pearls and wearing her crown, cruising into a wrestling event on a scooter, climbing a barrier, and leaping onto a wrestler. Other clips place her in everyday situations - from praising bright orange snack foods in a grocery aisle to playing soccer.
But the tone hasn’t always been lighthearted.
In October, OpenAI restricted the creation of content featuring Martin Luther King Jr. after his estate objected to degrading depictions. Some users had generated videos portraying King making animal noises during his “I Have a Dream” speech - an example of how AI tools let people make public figures perform actions they never did.
Experts warn this trend is pushing us deeper into the “uncanny valley” - the eerie discomfort felt when something appears almost, but not quite, human. Constance de Saint Laurent, a professor at Maynooth University in Ireland, noted that receiving hyper-realistic videos of a deceased relative could be deeply distressing, saying such creations can have “real consequences.”
Family members of the late Robin Williams, George Carlin, and Malcolm X have publicly condemned AI-generated portrayals of their loved ones. Robin Williams’ daughter, Zelda, has pleaded for people to stop sending her synthetic videos of her father, describing the experience as infuriating.
OpenAI has said it recognizes free-speech interests in depicting historical figures but believes families and estates should ultimately have a say in how someone’s likeness is used. The company now allows representatives of recently deceased individuals to request that their image not be used in Sora.
Critics argue this doesn’t go far enough. Hany Farid, co-founder of GetReal Security and a professor at the University of California, Berkeley, contends that releasing powerful generative tools while claiming to protect likeness rights is contradictory. He notes that even if one platform adds safeguards, other AI systems may not - meaning the problem is likely to escalate.
The risk doesn’t stop with celebrities. As these tools become more accessible, ordinary people who never lived in the spotlight could also find their names, faces, and voices co-opted without consent.
Researchers warn that the flood of synthetic media - sometimes dismissed as “AI slop” - may ultimately erode trust online. Rather than convincing everyone that fakes are real, the bigger danger may be that people stop believing anything at all.
As de Saint Laurent puts it, misinformation’s greatest impact isn’t that everyone accepts it - it’s that authentic information becomes harder to trust.