Google’s Threat Intelligence Group has sounded a significant alarm: government-backed hackers from at least four nations are actively exploiting the company’s own Gemini artificial intelligence chatbot to enhance the speed, scale, and sophistication of their cyber operations. The revelation, detailed in a report released by Google, underscores the uncomfortable reality that the same generative AI tools designed to boost productivity are now being co-opted by adversarial state actors to research vulnerabilities, craft phishing campaigns, and streamline the grunt work of digital espionage.
The findings mark one of the most concrete public acknowledgments by a major technology company that its AI products are being systematically misused by hostile foreign intelligence services. While cybersecurity experts have long warned that generative AI would inevitably become a tool of the trade for hackers, Google’s report provides granular detail about which nations are involved, how they are using the technology, and what guardrails have — and have not — held up under pressure.
Iran Leads the Pack, With China, North Korea, and Russia Close Behind
According to Google’s Threat Intelligence Group, advanced persistent threat (APT) actors linked to more than 20 countries have attempted to use Gemini in their operations. However, the most prolific and aggressive users hail from four nations: Iran, China, North Korea, and Russia. As Digital Trends reported, Iranian APT groups accounted for the largest share of Gemini misuse, representing roughly 75% of all identified malicious activity on the platform by state-sponsored actors.
Iranian hackers have been using Gemini for a broad range of tasks. These include researching defense organizations and intelligence experts, generating phishing content, creating content for influence operations, and translating technical documents. Google’s report noted that Iranian actors explored how Gemini could assist with reconnaissance on military and government targets in the United States and allied nations, seeking to understand organizational structures and identify potential points of vulnerability.
China’s Quiet but Methodical Exploitation
Chinese-linked APT groups, while less voluminous in their Gemini queries than their Iranian counterparts, demonstrated a methodical and technically focused approach. Google found that Chinese hackers used Gemini to assist with scripting and coding tasks, troubleshoot technical problems related to their existing tools, and research specific network penetration techniques. In several cases, Chinese actors sought help understanding how to move laterally within compromised networks — a hallmark of sophisticated espionage operations aimed at extracting sensitive data from government and corporate systems.
The use of Gemini by Chinese actors also extended to researching how to exploit specific known vulnerabilities in widely used software products, effectively using the AI as a research assistant to accelerate their exploitation timelines. This aligns with broader intelligence community assessments that Chinese cyber operations are among the most persistent and well-resourced in the world, with groups like Volt Typhoon and Salt Typhoon making headlines in recent months for infiltrating U.S. critical infrastructure and telecommunications networks.
North Korean Hackers Turn to AI for Job Fraud and Malware Development
North Korean threat actors presented a particularly unusual use case. According to Google’s findings, DPRK-linked hackers used Gemini not only for conventional cyber operations — such as researching vulnerabilities and drafting malicious code — but also to support Pyongyang’s well-documented scheme of placing covert IT workers in Western companies. These operatives, posing as legitimate freelance developers or employees, funnel their salaries back to the North Korean regime to fund weapons programs and circumvent international sanctions.
Google’s report revealed that North Korean actors used Gemini to draft cover letters, research job listings, and craft professional communications designed to help their operatives pass as qualified candidates at technology firms. They also used the AI to research salary expectations and workplace norms in target countries — a chilling illustration of how generative AI can be leveraged not just for traditional hacking but for elaborate social engineering and fraud at an industrial scale. The FBI and the U.S. Department of Justice have previously warned about this North Korean IT worker scheme, and Google’s findings suggest that AI tools are making it easier for Pyongyang to scale these operations.
Russia’s Surprisingly Restrained Approach
Perhaps the most surprising finding in Google’s report was the relatively limited use of Gemini by Russian state-backed hackers. Despite Russia’s well-established reputation for aggressive cyber operations — from the SolarWinds supply chain attack to interference in democratic elections — Russian APT groups accounted for a comparatively small share of Gemini misuse. Google noted that Russian actors primarily used the chatbot for assistance with scripting tasks and to translate or rephrase existing malicious code.
There are several possible explanations for this restraint. Russian intelligence services may have developed their own internal AI tools, reducing their reliance on commercially available platforms like Gemini. Alternatively, Russian operators may be exercising greater operational security, wary that their queries on a Google-owned platform could be monitored and attributed. It is also possible that Russian actors are more heavily utilizing other AI services, including open-source large language models that can be run locally without any oversight or usage logging.
Guardrails Tested but Not Broken — Yet
Google emphasized in its report that Gemini’s built-in safety mechanisms have, to date, prevented the most dangerous potential misuse. The company stated that threat actors attempted to use Gemini to generate malware, develop zero-day exploits, and create tools for post-exploitation activity, but that these requests were blocked by the platform’s safety filters. In other words, while hackers have found Gemini useful for research, reconnaissance, and content generation, they have not yet succeeded in turning it into a fully automated weapons factory.
However, Google’s Threat Intelligence Group was careful to note that this is an evolving situation. As Digital Trends highlighted, the hackers are continually probing the boundaries of what Gemini will and will not do, experimenting with prompt engineering techniques — including jailbreak attempts — designed to circumvent safety restrictions. Google reported that some actors tried reformulating blocked requests using different phrasing, role-playing scenarios, or multi-step prompts intended to extract restricted information incrementally. While these attempts were largely unsuccessful, the persistence of the effort suggests it is only a matter of time before more sophisticated evasion techniques emerge.
The Broader AI Security Dilemma Facing Silicon Valley
Google’s disclosure comes amid a broader reckoning in the technology industry about the dual-use nature of generative AI. OpenAI, the maker of ChatGPT, has published similar findings about state-sponsored actors attempting to exploit its platform, and Microsoft has reported comparable trends through its threat intelligence division. The pattern is consistent: every major AI provider is grappling with the reality that their tools are attractive to adversaries precisely because they are powerful, accessible, and capable of dramatically accelerating the preparatory phases of cyber operations.
The challenge for companies like Google is acute. Overly restrictive safety filters risk degrading the user experience for legitimate customers, while insufficiently robust protections could turn these platforms into force multipliers for hostile intelligence services. Google has invested heavily in its Threat Intelligence Group and in automated detection systems designed to identify and shut down accounts linked to state-sponsored abuse. But as the technology matures and open-source alternatives proliferate, the ability of any single company to serve as a meaningful chokepoint diminishes.
What This Means for Enterprises and Governments
For corporate security teams and government agencies, Google’s report serves as a pointed reminder that the threat environment is being reshaped by AI in real time. The use of Gemini by state-backed hackers to conduct reconnaissance on organizations, draft convincing phishing lures, and troubleshoot exploitation tools means that defenders must assume their adversaries are operating with AI-enhanced capabilities. Phishing emails are likely to be more grammatically polished and contextually convincing. Social engineering attacks may be better tailored to specific targets. And the time between the discovery of a vulnerability and its exploitation may continue to shrink.
Security professionals are urging organizations to invest in AI-powered defensive tools to match the pace of AI-augmented attacks. This includes deploying advanced email filtering systems capable of detecting AI-generated phishing content, implementing zero-trust network architectures, and conducting regular threat-hunting exercises that account for the possibility that adversaries are using AI to accelerate their operations. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has also emphasized the importance of information sharing between the public and private sectors to rapidly disseminate threat intelligence about AI-enabled attacks.
A New Chapter in the AI Arms Race
Google’s report is unlikely to be the last of its kind. As generative AI models become more capable and more widely available — including through open-source projects that lack any centralized safety controls — the opportunities for misuse will only multiply. The question facing policymakers, technology companies, and the cybersecurity community is not whether AI will be weaponized at greater scale, but how quickly defenses can adapt to keep pace.
For now, Google says it is committed to transparency about the threats it observes and to continuing to strengthen Gemini’s safety mechanisms. But the company’s own findings make clear that the cat-and-mouse game between AI providers and state-sponsored hackers is intensifying — and that the stakes, measured in national security and the safety of millions of users, could hardly be higher.


WebProNews is an iEntry Publication