Kentucky AG Leads 47-State Coalition Against Deepfake Intimate Imagery

Kentucky AG Russell Coleman leads a bipartisan coalition of 47 attorneys general urging tech platforms such as Google and PayPal to implement safeguards against deepfake nonconsensual intimate imagery. The coalition demands warnings, filters, and restrictions to prevent harm and exploitation, addressing regulatory gaps while balancing AI innovation with user safety.
Written by Ava Callegari

A Bipartisan Push Against Digital Deception

Kentucky Attorney General Russell Coleman has emerged as a leading voice in a nationwide effort to combat the rising tide of deepfake content, particularly nonconsensual intimate imagery that threatens personal safety and privacy. In a recent initiative, Coleman spearheaded a bipartisan coalition of 47 attorneys general, urging major search engines and payment platforms to implement stricter safeguards against the creation and dissemination of such harmful material. This call to action, detailed in reports from Yahoo News, emphasizes the urgent need for tech companies to prevent users from accessing tools and information that facilitate deepfake production, framing it as a matter of life-saving importance.

The coalition’s letter highlights how deepfakes, often generated using artificial intelligence, have been weaponized to exploit individuals, especially women and minors, through revenge porn and other forms of digital abuse. By demanding that search engines like Google and payment processors such as PayPal add warnings, filters, and restrictions—similar to those applied to content promoting violence—Coleman aims to disrupt the ecosystem enabling these creations. As noted in coverage by Fox 56 News, the group cites alarming statistics, including a report finding that 98% of deepfake videos online constitute nonconsensual intimate imagery, underscoring the scale of the problem.

Regulatory Gaps and Global Echoes

This push comes amid broader concerns about the inadequacy of current laws to handle AI-driven misinformation and harm. For instance, Senator Amy Klobuchar has shared personal experiences with deepfakes in an opinion piece for The New York Times, advocating for congressional action to curb increasingly realistic forgeries that could undermine elections and public trust. Coleman’s coalition builds on such sentiments, pressing for proactive measures from tech giants to label or remove deepfake content swiftly, much like mandates emerging in other countries.

Internationally, the struggle to regulate deepfakes reveals a patchwork of approaches. China’s early rules on digital forgeries, as explored in a 2023 analysis by The New York Times, set a precedent for content labeling and rapid takedowns, though free-speech concerns have slowed similar efforts elsewhere. In the U.S., states like Maryland and Massachusetts are aligning with Coleman’s efforts, with attorneys general like Anthony G. Brown and Andrea Campbell calling for enhanced protections, as reported in The MoCo Show and Mass.gov.

Industry Responses and Future Challenges

Tech platforms have faced mounting pressure, with posts on X reflecting public sentiment around the need for better AI safeguards, including transparency in deepfake detection and content moderation. However, experts from organizations such as RAND, in a commentary published on the think tank’s site, caution against overhyping deepfakes as an existential threat, suggesting that current uses—more often harassment than widespread disinformation—should guide targeted regulations.

Coleman’s initiative also addresses the role payment platforms play in monetizing deepfake tools, urging them to deny services to sites promoting such content. This mirrors global calls, such as those from the World Economic Forum, for ongoing vigilance even though deepfakes did not disrupt the 2024 elections as feared. As AI technology advances, insiders anticipate that without robust interventions, the proliferation of deepfakes could escalate, leading to more severe societal harms.

Balancing Innovation and Safety

The coalition’s demands extend to search engines, advocating for safeguards that deter searches related to deepfake creation, as echoed in reports from The Messenger. This approach seeks to balance technological innovation with ethical boundaries, preventing misuse while fostering responsible AI development.

Ultimately, Coleman’s leadership signals a pivotal moment for U.S. policy on digital content. By uniting attorneys general across party lines, the effort underscores a consensus that platforms must prioritize user safety over unchecked growth. As debates intensify, the tech industry faces a reckoning: adapt to these calls or risk regulatory backlash that could reshape online accountability for years to come.
