The race to harness artificial intelligence has opened a new and dangerous front in global cyber conflict. Google’s Threat Intelligence Group has revealed that government-linked hacking units from China, Iran, Russia, and North Korea have been actively experimenting with the company’s Gemini AI assistant to enhance their cyber espionage campaigns, develop malicious code, and conduct influence operations at scale. The findings represent one of the most comprehensive public disclosures to date of how nation-state adversaries are attempting to turn commercial AI tools against the very democracies that created them.
The report, published by Google, details how Advanced Persistent Threat (APT) actors and information operation (IO) groups affiliated with more than 20 countries have attempted to use Gemini in their operations. While Google emphasized that none of the threat actors achieved breakthrough capabilities through the AI platform, the sheer breadth and persistence of their efforts underscore a troubling reality: hostile governments are systematically probing AI systems for strategic advantage, and the experimentation is only accelerating.
Iran Leads in Volume, China Leads in Sophistication
According to The Hacker News, Iranian APT actors accounted for the largest share of Gemini usage among the nation-state groups tracked by Google. More than 10 Iranian-backed groups were observed using the AI tool for a wide range of tasks, including researching defense organizations and experts, generating phishing material, producing content for influence operations, and translating and summarizing technical documents. The Iranian groups appeared particularly interested in using Gemini to craft convincing social engineering lures and to research vulnerabilities in publicly accessible systems.
China-linked APT groups, meanwhile, demonstrated a more technically sophisticated approach. Google’s Threat Intelligence Group identified over 20 China-backed groups using Gemini, with their queries focused on tasks such as troubleshooting code, scripting for lateral movement within compromised networks, understanding how to deepen access after an initial breach, and researching methods to evade detection. Chinese actors also used Gemini for reconnaissance on U.S. military and government institutions, for translating technical documents, and for researching specific software vulnerabilities. Their queries suggested a high degree of operational maturity and a clear focus on intelligence collection against American targets.
North Korea’s Nuclear Ambitions and IT Worker Schemes
North Korean threat actors presented a uniquely multifaceted use case for Gemini. As reported by The Hacker News, DPRK-affiliated groups used the AI platform not only for conventional cyber espionage research but also to support Pyongyang’s clandestine scheme of placing IT workers in Western companies under false identities. Queries included drafting cover letters, researching job listings, and crafting explanations for professional gaps — all apparently designed to help North Korean operatives secure remote employment at technology firms in the United States and Europe.
Beyond the IT worker fraud, North Korean groups also used Gemini to research topics related to the country’s nuclear and missile programs, to gather information on cryptocurrency — a known revenue stream for the regime — and to explore technical topics related to malware development. The dual-use nature of North Korea’s AI queries illustrates how the regime’s cyber apparatus serves both intelligence and revenue-generation functions simultaneously, making it one of the most versatile and dangerous state-sponsored hacking ecosystems in the world.
Russia: Surprisingly Restrained, but Not Absent
In a somewhat unexpected finding, Russian threat actors showed comparatively limited engagement with Gemini. Google identified only a small number of Russian APT groups using the platform, and their activities were largely confined to assistance with scripting, translation, and payload crafting. Some Russian actors used Gemini to rewrite or convert publicly available malware into different programming languages, a technique that can help evade signature-based detection tools, as the sketch below illustrates.
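To make the evasion concrete, it helps to recall how signature-based detection works at its simplest: an engine searches files for exact byte patterns associated with known malware. The minimal sketch below (a toy scanner, not any vendor’s engine; the family name and byte pattern are hypothetical placeholders) shows why the same logic rewritten in another language produces different bytes and sails past such a check.

```python
# Toy byte-pattern scanner: a simplified stand-in for signature-based
# detection. The signature below is a hypothetical placeholder, not a
# real malware indicator.
from pathlib import Path

SIGNATURES = {
    "demo-family-a": b"\x48\x65\x6c\x6c\x6f\x2c\x20\x69\x6d\x70\x6c\x61\x6e\x74",
}

def scan_file(path: Path) -> list[str]:
    """Return the names of any signatures whose exact bytes appear in the file."""
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

if __name__ == "__main__":
    # Two stand-ins for "the same tool" in two languages: only the first
    # contains the exact bytes the signature was written against.
    Path("original.bin").write_bytes(b"header" + SIGNATURES["demo-family-a"])
    Path("ported.bin").write_bytes(b"different bytes, identical behavior")
    for name in ("original.bin", "ported.bin"):
        print(name, "->", scan_file(Path(name)) or "clean")
```

Because the match is on exact bytes, even a faithful port of the same program defeats it, which is why defenders layer behavioral and heuristic detection on top of signatures.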
Analysts have offered several hypotheses for Russia’s relatively low usage. Russian intelligence services may prefer domestically developed AI models to avoid exposing their operations to a U.S.-based platform. It is also possible that Russian operators are using other commercial AI tools that have fewer safeguards or that their most sensitive operations are conducted through channels not visible to Google. Regardless, the limited activity should not be interpreted as a lack of capability. Russia remains one of the most formidable cyber powers in the world, and its restraint on Gemini may simply reflect operational security discipline rather than disinterest in AI-augmented hacking.
AI as a Force Multiplier, Not a Silver Bullet
Google was careful to note that its safety mechanisms prevented Gemini from being used to generate novel zero-day exploits or create entirely new categories of malware. The company said that threat actors frequently encountered guardrails that blocked their most dangerous requests, and that many of their queries were relatively unsophisticated — amounting to what Google described as using AI for “basic research, troubleshooting, and content creation” rather than for developing genuinely new attack capabilities.
However, cybersecurity experts caution against complacency. Even if AI tools are not yet delivering transformative offensive capabilities to state-sponsored hackers, they are already serving as powerful productivity enhancers. Tasks that once required hours of manual research — identifying targets, crafting phishing emails in fluent English, translating intercepted documents, debugging exploit code — can now be accomplished in minutes with AI assistance. This efficiency gain is particularly significant for groups operating in languages other than English, as AI dramatically lowers the barrier to producing convincing social engineering content for Western targets.
The Broader Implications for AI Governance and National Security
Google’s disclosure arrives at a critical moment in the debate over AI governance and the responsibilities of technology companies in safeguarding their platforms against state-sponsored abuse. The report adds to a growing body of evidence — including similar disclosures from Microsoft and OpenAI in recent years — that hostile governments are systematically testing the boundaries of commercial AI systems. Microsoft previously reported that groups linked to Russia, China, Iran, and North Korea had used its AI services for similar purposes, and OpenAI has acknowledged disrupting state-linked influence operations on its platform.
The findings raise difficult questions about the balance between making AI widely accessible and preventing its misuse by adversarial states. Technology companies have invested heavily in safety filters and usage policies, but the cat-and-mouse dynamic between AI providers and state-sponsored hackers is inherently asymmetric. Defenders must block every possible avenue of abuse, while attackers need only find a single gap. Moreover, as open-source AI models become increasingly powerful, the ability of any single company to control misuse diminishes significantly. Threat actors who are blocked on Gemini or ChatGPT can potentially turn to open-weight models that have no usage restrictions whatsoever.
Information Operations and the Weaponization of Content
Beyond traditional cyber espionage, Google’s report highlighted the use of Gemini by state-linked information operation groups. Iranian and Chinese IO actors were observed using the platform to generate propaganda content, draft articles promoting specific geopolitical narratives, and create social media posts designed to manipulate public opinion in target countries. These activities represent a natural evolution of influence operations that have been documented since at least 2016, now supercharged by AI’s ability to produce high-volume, linguistically polished content on demand.
The use of AI for influence operations is particularly concerning because it dramatically reduces the cost and increases the scale at which disinformation can be produced. A single operator with access to a large language model can generate hundreds of unique articles, social media posts, and comments in a matter of hours — content that would previously have required a team of writers. As democratic societies grapple with the challenge of maintaining information integrity ahead of elections and during geopolitical crises, the AI-enabled industrialization of propaganda represents a formidable threat to public discourse.
What Comes Next in the AI-Cyber Arms Race
Google stated that it is using the insights from its Threat Intelligence Group’s research to strengthen Gemini’s defenses and to collaborate with government partners on countering AI-enabled threats. The company has also contributed indicators of compromise and technical details to the broader cybersecurity community to help other organizations detect and mitigate similar activity.
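For readers unfamiliar with how shared indicators are consumed in practice, the sketch below illustrates one common pattern under simple assumptions: a plain-text feed of SHA-256 file hashes is loaded and matched against files on disk. The feed filename and scan directory are hypothetical placeholders, not details from Google’s report.

```python
# Minimal IoC-matching sketch: load a hypothetical newline-delimited feed
# of SHA-256 digests and flag any local files whose hash appears in it.
import hashlib
from pathlib import Path

def load_iocs(feed_path: str) -> set[str]:
    """Load one indicator per line, skipping blanks and '#' comments."""
    iocs = set()
    for line in Path(feed_path).read_text().splitlines():
        line = line.strip().lower()
        if line and not line.startswith("#"):
            iocs.add(line)
    return iocs

def scan_directory(root: str, iocs: set[str]) -> list[Path]:
    """Return files whose SHA-256 digest matches a shared indicator."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in iocs:
                hits.append(path)
    return hits

if __name__ == "__main__":
    indicators = load_iocs("shared_iocs.txt")  # hypothetical feed file
    for hit in scan_directory("/var/tmp", indicators):  # hypothetical path
        print(f"IoC match: {hit}")
```

Hash-based indicators are the simplest kind to act on; real feeds also carry domains, IP addresses, and behavioral signatures, but the consumption pattern is the same: ingest, normalize, match, alert.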
Yet the trajectory is clear: as AI models become more capable, the potential for misuse by state-sponsored actors will only grow. The current generation of threats — enhanced phishing, automated reconnaissance, code debugging, and propaganda generation — may represent just the opening chapter of a much longer and more dangerous story. Governments, technology companies, and the cybersecurity industry will need to develop new frameworks for cooperation, detection, and deterrence if they hope to stay ahead of adversaries who are investing heavily in turning artificial intelligence into a weapon of statecraft.
For now, Google’s report serves as both a warning and a call to action. The world’s most capable AI systems are already in the crosshairs of the world’s most capable threat actors. The question is no longer whether hostile governments will use AI for cyber operations — it is how quickly the technology will evolve, and whether defenders can adapt fast enough to meet the challenge.