In the rapidly evolving world of artificial intelligence, a troubling trend has emerged: the use of deepfake technology to resurrect the images and voices of the deceased. This practice raises profound ethical questions, even as legal protections lag behind. A recent piece in TechCrunch highlights how companies are creating AI-generated replicas of historical figures and everyday people who have passed away, often without clear consent from estates or families. The article argues that while you can't legally libel the dead, a principle rooted in common law, exploiting their likeness through deepfakes crosses into moral territory that demands scrutiny.
Technologists and ethicists alike are grappling with the implications. For instance, deepfakes can manipulate videos to make it appear as if a deceased person is speaking or acting in ways they never did, potentially distorting history or personal legacies. The TechCrunch coverage of ByteDance’s OmniHuman-1 system underscores how advanced these tools have become, generating convincingly lifelike videos that blur the line between reality and fabrication.
Ethical Boundaries in AI Resurrection
Public sentiment, as reflected in various online discussions, reveals widespread discomfort. Posts on X (formerly Twitter) from researchers and filmmakers express outrage over the lack of consent, with one noting that deepfaking deceased loved ones for engagement feels inherently malicious. This echoes broader concerns about psychological harm: families might encounter gory or unauthorized recreations of their relatives' final moments, as detailed in reports from ScamWatchHQ.
Legally, the terrain is uneven. California's recent law, as reported by IndieWire on X, prohibits AI replicas of performers' likenesses without consent and extends those protections to the dead. Yet federal frameworks remain patchy. The DEEPFAKES Accountability Act, discussed in an older TechCrunch analysis, proposed watermarking and disclosure requirements, but enforcement challenges persist, especially across borders.
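To make the watermarking idea concrete, here is a minimal sketch of how a generation pipeline might attach a machine-readable disclosure to its output using PNG text chunks via the Pillow library. The metadata keys and the save_with_disclosure helper are illustrative assumptions, not part of any statute, standard, or vendor API.

```python
# Illustrative only: one way a pipeline could label synthetic output.
# The metadata keys below are hypothetical, not an established standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_disclosure(image: Image.Image, path: str, model_name: str) -> None:
    """Save a PNG with embedded text chunks flagging it as AI-generated."""
    meta = PngInfo()
    meta.add_text("synthetic-media", "true")          # hypothetical flag
    meta.add_text("generator", model_name)            # which system produced it
    meta.add_text("disclosure", "AI-generated likeness; not a real recording")
    image.save(path, pnginfo=meta)


def read_disclosure(path: str) -> dict:
    """Recover any disclosure fields from a labeled PNG's text chunks."""
    with Image.open(path) as img:
        return dict(img.text)
```

Such embedded labels are trivially stripped by re-encoding the file, which is one reason the enforcement challenges noted above persist; more robust provenance schemes bind signed metadata to the content itself rather than to a container field.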
Technological Advancements and Risks
Advancements in generative AI exacerbate these issues. Tools now flood the web with disinformation, as explored in a TechCrunch session at Disrupt 2024, where experts from the Center for Countering Digital Hate warned of state actors exploiting deepfakes for propaganda. When applied to the deceased, this could rewrite narratives: imagine a deepfaked historical leader endorsing modern ideologies they never supported.
Industry insiders point to scams as a dark underbelly. A ThreatVirus report details how deepfake cyberattacks in 2025 have led to multimillion-dollar frauds, including scams built on Elon Musk's likeness, and extending such tactics to the dead amplifies the emotional manipulation. Retirees and other victims have lost fortunes to AI-generated endorsements from figures they presumed trustworthy, even posthumously.
Regulatory and Industry Responses
Calls for regulation are growing louder. A bibliometric analysis published in an MDPI journal on open-access publishing traces a decade of deepfake research, emphasizing ethical lapses in generative AI. Nonprofits tracking deepfakes, per TechCrunch, note that while election interference grabs headlines, personal exploitation, such as resurrecting the dead for profit or virality, poses subtler threats.
Companies have stumbled too. TechCrunch exposed Yepic AI's broken promise not to deepfake people without consent, highlighting the need for self-imposed industry standards. As one developer urged on X, real-time impersonation risks demand government oversight, akin to crypto regulation.
Toward a Balanced Future
For industry leaders, the path forward involves balancing innovation with responsibility. Ethical AI frameworks, perhaps inspired by California’s model, could mandate consent from heirs and transparent labeling of deepfakes. Without them, the technology risks eroding trust in digital media entirely.
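As a thought experiment in what a consent mandate could look like in practice, the following sketch gates likeness generation on a registry of estate authorizations. The ConsentRecord fields, the registry, and the "Jane Doe" entry are hypothetical placeholders, not a description of any real framework or law.

```python
# Hypothetical consent gate for posthumous likeness generation.
# All names, fields, and records here are illustrative assumptions.
from dataclasses import dataclass
from datetime import date


@dataclass
class ConsentRecord:
    subject: str              # name of the deceased person
    grantor: str              # estate or heir who signed off
    expires: date             # authorizations should not be open-ended
    permitted_uses: set[str]  # e.g. {"documentary", "memorial"}


def may_generate_likeness(subject: str, use_case: str,
                          registry: dict[str, ConsentRecord]) -> bool:
    """Refuse generation unless a current, matching consent record exists."""
    record = registry.get(subject)
    if record is None:
        return False                      # no estate sign-off at all
    if record.expires < date.today():
        return False                      # authorization has lapsed
    return use_case in record.permitted_uses


# Example: a documentary request passes, an ad campaign does not.
registry = {
    "Jane Doe": ConsentRecord(
        subject="Jane Doe",
        grantor="Doe Family Estate",
        expires=date(2030, 1, 1),
        permitted_uses={"documentary", "memorial"},
    )
}
print(may_generate_likeness("Jane Doe", "documentary", registry))  # True
print(may_generate_likeness("Jane Doe", "advertising", registry))  # False
```

The point of the sketch is the design choice: consent is checked per use case and expires by default, so an heir's sign-off for a memorial does not silently become a license for advertising.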
Ultimately, while deepfaking the dead may not constitute libel, it challenges our societal values. As AI evolves, insiders must advocate for protections that honor the dignity of those no longer here to defend themselves, ensuring technology serves humanity rather than exploiting its vulnerabilities.