North Korean Hackers Use ChatGPT for Deepfake Military ID Phishing

Suspected North Korean hackers from the Kimsuky group used OpenAI's ChatGPT to create a deepfake South Korean military ID for phishing attacks on military-linked targets. The incident highlights AI's escalating role in cyber espionage and has prompted calls for stricter governance and stronger defenses against such misuse.
Written by Elizabeth Morrison

In a startling revelation that underscores the evolving threat of cyber espionage, suspected North Korean hackers have leveraged OpenAI’s ChatGPT to fabricate a deepfake military identification card for use against South Korean targets. According to cybersecurity researchers, the hackers, believed to be part of the state-sponsored group known as Kimsuky, used the AI tool to generate a convincing replica of a South Korean military ID. The incident, detailed in a recent report, highlights how generative AI is being weaponized for sophisticated phishing and infiltration operations.

The operation involved crafting a draft of the fake ID, which was then deployed in a phishing attack aimed at compromising sensitive targets. Researchers from cybersecurity firm CrowdStrike, who analyzed the breach, noted that the hackers prompted ChatGPT to assist in designing the visual elements of the forgery, blending AI-generated content with traditional hacking techniques. This marks a significant escalation in North Korea’s cyber capabilities: AI tools are automating what were once manual, error-prone processes.

AI’s Role in State-Sponsored Hacking

North Korea’s adoption of AI tools like ChatGPT isn’t an isolated development. Earlier reporting from The Korea Herald in February 2025 pointed to growing concerns over Pyongyang’s use of such tools for fraud and scams, including crypto theft. In this latest case, the Kimsuky group, long associated with espionage against South Korea and its allies, employed ChatGPT to refine the deepfake ID’s details, making it appear authentic enough to withstand initial scrutiny.

The attack’s methodology involved sending phishing emails that included the forged ID as bait, potentially to extract classified information or gain network access. Bloomberg’s coverage of the event, published on September 14, 2025, emphasized that this deepfake was part of a broader campaign targeting South Korean military personnel, with the AI aiding in rapid prototyping of deceptive materials. Such tactics reduce the time and expertise needed for forgeries, allowing hackers to scale operations efficiently.
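To make the defensive side of this concrete, the sketch below shows one elementary triage check a mail gateway or analyst script might run against lures of this kind: parsing an email's Authentication-Results header (RFC 8601) and flagging messages whose SPF, DKIM, or DMARC checks did not pass. This is a minimal Python illustration using only the standard library, not a description of any tooling actually deployed by the targeted organizations; the sample message, sender, and domains are hypothetical.

```python
# Minimal phishing-triage sketch: parse an email's Authentication-Results
# header and flag any SPF/DKIM/DMARC mechanism that did not pass.
# Illustrative only -- real mail gateways perform far deeper analysis.
from email import policy
from email.parser import BytesParser

def auth_failures(raw_message: bytes) -> list[str]:
    """Return the authentication clauses that did not report 'pass'."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    results = str(msg.get("Authentication-Results", ""))
    failures = []
    for clause in results.split(";")[1:]:  # skip the leading authserv-id
        clause = clause.strip()
        for mech in ("spf", "dkim", "dmarc"):
            if clause.startswith(mech) and "=pass" not in clause:
                failures.append(clause)
    return failures

if __name__ == "__main__":
    # Hypothetical raw message mimicking a credential-themed lure.
    sample = (
        b"Authentication-Results: mx.example.com;"
        b" spf=fail smtp.mailfrom=attacker.example;"
        b" dkim=none\r\n"
        b"From: registrar@spoofed.example\r\n"
        b"Subject: ID card review request\r\n"
        b"\r\n"
        b"Please review the attached identification card.\r\n"
    )
    for clause in auth_failures(sample):
        print("FLAG:", clause)
```

Failed sender authentication is a common tell in credential-themed phishing, though sophisticated campaigns can pass these checks by sending from lookalike domains they legitimately control, which is why such a check can only be a first filter.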

Broader Implications for Global Cybersecurity

This incident comes amid a surge in North Korean cyber activity, as evidenced by posts on X (formerly Twitter) from cybersecurity researchers such as ZachXBT, who in August 2025 detailed how DPRK operatives manage multiple fake identities to infiltrate companies. The use of ChatGPT here fits a pattern documented in Bloomberg’s reporting, in which AI automates elements of social-engineering attacks.

OpenAI has responded by monitoring and restricting accounts linked to malicious use, as outlined in its June 2025 report on shutting down networks tied to North Korea and other actors. However, experts warn that as AI becomes more accessible, state actors like Kimsuky could further integrate it into ransomware or DDoS campaigns, per insights from Medium articles published in August 2025 by cybersecurity analyst David SEHYEON Baek.

North Korea’s Evolving Tactics and International Responses

Historically, North Korean hackers have funded regime activities through cybercrime, stealing billions in cryptocurrency, according to Chainalysis data referenced in X posts by Mario Nawfal in late 2024. The deepfake ID case builds on this, with AI enabling more targeted strikes against critical infrastructure, such as South Korea’s military systems.

South Korean authorities, in collaboration with international partners, are ramping up defenses, including AI-powered detection tools. Yet, as Fortune reported on September 14, 2025, the ease of accessing tools like ChatGPT poses ongoing challenges, prompting calls for stricter AI governance. Industry insiders argue that without global standards, such exploits will proliferate, blurring lines between digital forgery and real-world threats.
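As a rough illustration of where such detection tooling can start, the Python sketch below flags images that lack the EXIF camera metadata a genuine photograph normally carries, a property AI-generated images often share. This is a deliberately naive heuristic shown for illustration only (metadata is trivially stripped or forged), the file name is hypothetical, and production detectors rely on pixel-level forensics and trained models rather than metadata alone.

```python
# Toy heuristic hinting at how automated screening of suspect ID images
# might begin: genuine photos usually carry camera EXIF tags (Make, Model),
# while AI-generated images typically ship with none. A weak signal at
# best, since metadata can be forged or stripped; real detectors go much
# deeper than this.
from PIL import Image, ExifTags

def missing_camera_metadata(path: str) -> bool:
    """True if the image lacks the EXIF tags a real camera writes."""
    exif = Image.open(path).getexif()
    tag_names = {ExifTags.TAGS.get(tag_id) for tag_id in exif}
    return not tag_names & {"Make", "Model"}

if __name__ == "__main__":
    path = "suspect_attachment.png"  # hypothetical attachment file
    if missing_camera_metadata(path):
        print(f"{path}: no camera EXIF metadata, escalate for manual review")
```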

Future Risks and Mitigation Strategies

Looking ahead, the convergence of AI and hacking could extend to deepfake videos or voices used for disinformation, as speculated in 2023 X discussions by users like Havoc, who envisioned scenarios such as fabricated nuclear threats. Cybersecurity firms are advocating enhanced verification protocols, such as blockchain-based ID systems, to counter these AI-assisted forgeries.
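Blockchain-based ID proposals differ in their details, but most reduce to the same primitive: an issuer cryptographically signs a credential's canonical fields, so any verifier holding the issuer's public key can detect tampering no matter how visually convincing the forgery. The Python sketch below illustrates that core check with an Ed25519 signature from the widely used cryptography package; the issuer, fields, and ID number are all hypothetical.

```python
# Minimal sketch of the signature check at the core of verifiable-credential
# schemes (blockchain-anchored or otherwise): the issuer signs the ID's
# canonical fields, and any change to those fields breaks verification.
# All names and field values here are hypothetical.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer side (e.g., a hypothetical defense-ministry registrar).
issuer_key = Ed25519PrivateKey.generate()
credential = {"name": "Hong Gil-dong", "rank": "Captain", "id": "ROK-000000"}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Verifier side: a forged or altered credential fails the signature check.
public_key = issuer_key.public_key()
forged = dict(credential, rank="Colonel")
try:
    public_key.verify(signature, json.dumps(forged, sort_keys=True).encode())
    print("credential verified")
except InvalidSignature:
    print("signature check failed: credential is forged or altered")
```

Because the signature binds the exact field values, even a pixel-perfect deepfake of the card image cannot yield a credential that verifies, which is precisely the property AI-assisted forgeries cannot defeat.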

Ultimately, this episode serves as a wake-up call for tech companies and governments. OpenAI’s proactive measures, including blocking suspicious accounts as reported by Hackread in June 2025, are steps forward, but collaborative intelligence sharing will be key to staying ahead of adaptive threats from regimes like North Korea. As AI evolves, so too must the defenses against its misuse in the shadows of global cyber warfare.
