For years, nation-state hackers have relied on sophisticated toolkits, zero-day exploits, and painstaking manual reconnaissance to penetrate targets across the globe. Now, according to new findings from Google, some of the world’s most prolific state-backed threat actors are adding a powerful new instrument to their arsenals: generative artificial intelligence. Specifically, Google’s own Gemini AI platform has become a tool of choice for advanced persistent threat (APT) groups linked to North Korea, China, Iran, and Russia — a development that underscores both the promise and peril of widely accessible AI technology.
Google’s Threat Intelligence Group (GTIG) disclosed in a detailed report that it has observed more than a dozen state-affiliated hacking groups leveraging Gemini at various stages of their cyber operations, from initial reconnaissance and vulnerability research to crafting phishing lures and generating code for malicious payloads. The findings, first reported by The Hacker News, represent one of the most comprehensive public acknowledgments by a major AI provider that its own technology is being actively exploited by hostile foreign intelligence services.
North Korea’s UNC2970: The Most Aggressive AI Adopter
Among the most notable actors identified in Google’s report is UNC2970, a North Korean threat group that has drawn attention from cybersecurity researchers for its persistent targeting of defense, aerospace, and energy sector organizations. According to Google’s findings, UNC2970 has used Gemini to research potential targets, draft convincing social engineering messages, and troubleshoot technical challenges encountered during intrusion attempts. The group’s use of AI reportedly extends to generating content for fake professional profiles on platforms like LinkedIn, a tactic North Korean operatives have honed in recent years as part of elaborate recruitment-themed phishing campaigns.
North Korean APT groups, broadly speaking, emerged as the most prolific users of Gemini among the nation-state actors Google tracked. Beyond UNC2970, other DPRK-linked clusters were observed using the AI tool to research topics related to cryptocurrency — a domain of intense interest to Pyongyang, which has turned to digital asset theft as a critical revenue stream to fund its weapons programs. Some queries appeared designed to help operatives understand blockchain technologies, smart contract vulnerabilities, and methods for laundering stolen funds, according to the GTIG report.
China and Iran: Reconnaissance at Scale
Chinese APT groups, meanwhile, were observed using Gemini primarily for reconnaissance and technical research. Google identified several China-linked actors querying the AI platform for information about U.S. military and government networks, specific software vulnerabilities, and techniques for lateral movement within compromised environments. Some groups also used Gemini to assist with translating technical documents and generating scripts for post-exploitation activities — tasks that, while achievable through traditional means, can be significantly accelerated with AI assistance.
Iranian threat actors displayed a similarly broad pattern of Gemini usage. Groups tied to Iran’s Islamic Revolutionary Guard Corps (IRGC) and the country’s intelligence apparatus were observed using the platform to draft phishing emails in multiple languages, research targets in the defense and diplomatic sectors, and explore methods for evading common security controls. Notably, some Iranian actors used Gemini to generate content for influence operations, including drafting propaganda and disinformation materials aimed at audiences in the Middle East and beyond. This dual-use pattern — combining traditional cyber espionage with information warfare — reflects a growing convergence in how state actors approach digital operations.
Russia’s Measured Approach and the Limits of AI Guardrails
Russian state-backed groups, perhaps surprisingly, appeared to be the least active users of Gemini among the four major nation-state cyber powers. Google noted that Russian actors’ interactions with the platform were relatively limited, with some groups using it primarily for coding assistance and translating content. Analysts have speculated that Russian intelligence services may prefer domestically developed AI tools or may be exercising greater operational security by avoiding Western platforms that could be monitored. It is also possible that Russian operators are more active on other commercial AI platforms not covered by Google’s analysis.
Google emphasized that none of the observed threat actors were able to use Gemini to achieve fundamentally novel attack capabilities. The company’s safety filters and content policies blocked many attempts to generate overtly malicious outputs, such as complete malware code or step-by-step exploitation guides. However, the GTIG report acknowledged that AI meaningfully lowers the barrier to entry and increases the efficiency of skilled operators. Tasks that might have taken hours of manual research — identifying an organization’s technology stack, understanding a specific CVE, or crafting a contextually appropriate spear-phishing email — can be accomplished in minutes with AI assistance.
The Productivity Multiplier That Keeps Security Leaders Up at Night
This productivity multiplier effect is precisely what concerns cybersecurity leaders across the public and private sectors. “The real risk isn’t that AI gives attackers some magical new capability,” said one senior U.S. cybersecurity official in a recent briefing. “It’s that it makes their existing operations faster, cheaper, and harder to detect.” When a North Korean operative can use AI to generate a flawless English-language email impersonating a defense recruiter, the traditional telltale signs of a phishing attempt — awkward phrasing, grammatical errors, cultural missteps — effectively vanish.
Google’s disclosure comes amid a broader industry reckoning over the security implications of generative AI. Microsoft, working with OpenAI, published similar findings in early 2024, revealing that threat actors linked to Russia, China, Iran, and North Korea had used OpenAI’s models for comparable purposes. OpenAI itself has acknowledged disrupting state-affiliated accounts on its platform. The emerging picture is one in which every major AI provider is grappling with the same fundamental tension: the same capabilities that make generative AI transformative for legitimate users also make it invaluable to adversaries.
Regulatory Pressure and the Industry Response
The revelations are likely to intensify regulatory scrutiny of AI providers. Lawmakers in Washington and Brussels have increasingly pressed technology companies to demonstrate that they have adequate safeguards against the misuse of AI by hostile actors. Google stated that it is continuously refining its abuse detection and content filtering systems and that it works closely with government partners to share threat intelligence. The company also noted that it has taken action to terminate accounts associated with the identified threat actors.
Yet the cat-and-mouse dynamic is unlikely to resolve neatly. Threat actors have demonstrated considerable creativity in circumventing AI safety measures, using techniques such as jailbreak prompts, role-playing scenarios, and iterative query refinement to coax useful outputs from models that would otherwise refuse overtly malicious requests. Open-source AI models, which lack the centralized guardrails of commercial platforms, present an even thornier challenge — once a capable model is released into the wild, there is no practical way to prevent its use by adversaries.
A New Chapter in the Evolution of Cyber Threats
For corporate security teams and government defenders, the implications of Google’s findings are both concrete and strategic. On the tactical level, organizations should expect that adversaries’ phishing campaigns, social engineering attempts, and initial access operations will become more polished and harder to distinguish from legitimate communications. Security awareness training programs will need to be updated to reflect the reality that AI-generated lures may be virtually indistinguishable from authentic messages.
On the strategic level, the proliferation of AI-assisted cyber operations reinforces the need for defense-in-depth approaches that do not rely on any single point of detection. Behavioral analytics, zero-trust architectures, and robust endpoint detection and response capabilities become even more critical when the quality of adversary tradecraft improves across the board. Intelligence sharing between the public and private sectors — of the kind exemplified by Google’s GTIG report — will also be essential to staying ahead of rapidly evolving threats.
The age of AI-augmented cyber warfare is no longer theoretical. As The Hacker News noted in its coverage of the Google report, the use of Gemini by state-backed hackers represents a significant escalation in how adversaries leverage commercially available technology. The question now is not whether AI will reshape the dynamics of cyber conflict — it already has — but whether defenders can adapt quickly enough to match the pace of innovation on the other side.

