Chinese AI Firm Targets US, Taiwan with Disinformation Campaigns

China’s GoLaxy firm, linked to state entities, uses AI to amplify disinformation, targeting U.S. lawmakers and regions like Taiwan with tailored propaganda and real-time sentiment tracking. This “gray zone” tactic erodes trust globally. Amid U.S. budget cuts, experts urge reinvestment in AI defenses to counter these sophisticated threats.
Written by John Smart

In the shadowy realm of global information warfare, China’s deployment of artificial intelligence to amplify disinformation campaigns has reached a new level of sophistication, as revealed by recent leaks and analyses. Researchers at Vanderbilt University, in collaboration with former U.S. intelligence officials, have uncovered documents from a Chinese firm called GoLaxy that detail AI-driven operations aimed at influencing public opinion worldwide. These efforts, which blend advanced data analytics with generative AI, target everything from U.S. lawmakers to regional hotspots like Hong Kong and Taiwan, marking a shift toward what experts term “gray zone conflict” – operations that stop short of outright hostility but erode trust and sow division.

The documents, first examined by a team including Brett Goldstein, a former head of the Defense Digital Service, paint a picture of GoLaxy, founded in 2010, as a shadowy affiliate of the Chinese Academy of Sciences. They describe how the company collects vast troves of data on influential Americans, including members of Congress, to craft tailored propaganda. This isn’t rudimentary fake news; it’s precision-engineered content that leverages AI to monitor sentiment in real time and generate believable narratives at scale, making detection a Herculean task for even seasoned analysts.

The Rise of AI-Enhanced Influence Operations

GoLaxy’s toolkit, as outlined in the leaked materials, includes algorithms that automate the tracking of public opinion swings, enabling rapid response with disinformation that feels organic. For instance, during sensitive geopolitical events, the firm allegedly deploys AI to fabricate evidence, amplify emotional appeals, and hallucinate details that bolster false claims – tactics eerily reminiscent of findings in a 2023 CHI paper on large language models, though scaled to state-level ambitions. According to reporting from The New York Times, these operations align closely with Beijing’s national security priorities, even if direct government control remains unconfirmed.
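The leaked materials describe this opinion-tracking loop only at a high level, so the following is a minimal, purely illustrative Python sketch rather than anything from GoLaxy’s actual tooling. It scores a stream of posts with a small, hypothetical sentiment lexicon and flags rolling windows where opinion swings sharply negative, the kind of signal the documents suggest would trigger a rapid counter-messaging response. A real system would rely on trained models and far richer data, not a hand-built word list.

```python
from collections import deque
from statistics import mean

# Toy sentiment lexicon (hypothetical); a real system would use a trained
# sentiment model rather than a hand-built word list.
POSITIVE = {"support", "trust", "strong", "win", "safe"}
NEGATIVE = {"corrupt", "fail", "weak", "crisis", "betray"}

def score(post: str) -> float:
    """Crude lexicon score in [-1, 1] for a single post."""
    words = post.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def monitor(stream, window_size=100, alert_threshold=-0.3):
    """Yield the rolling average sentiment whenever a window of recent
    posts swings sharply negative."""
    window = deque(maxlen=window_size)
    for post in stream:
        window.append(score(post))
        if len(window) == window.maxlen:
            avg = mean(window)
            if avg < alert_threshold:
                yield avg  # an opinion swing worth responding to

# Synthetic example feed about a hypothetical topic.
posts = [
    "the policy is a crisis and a total fail",
    "weak leadership, corrupt process",
    "I support this, it feels safe and strong",
] * 100
for swing in monitor(posts):
    print(f"Negative swing detected: rolling sentiment {swing:.2f}")
    break
```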

This evolution comes at a precarious moment for the U.S., where resources to combat foreign disinformation are being scaled back amid budget constraints and shifting priorities. The State Department’s Global Engagement Center, once a bulwark against such threats, faces potential defunding, leaving a vacuum that adversaries like China are eager to exploit. Generative AI exacerbates the problem by producing content that’s not just voluminous but convincingly human-like, blurring the lines between truth and fabrication in ways that overwhelm fact-checkers and social media moderators.

U.S. Vulnerabilities in the Disinformation Arms Race

Vanderbilt’s revelations, presented at events like DEF CON in Las Vegas, underscore how GoLaxy collaborates with senior Chinese intelligence, party, and military officials to execute campaigns that run with unprecedented speed and precision. As detailed in a Nextgov/FCW analysis, these include efforts to “divide” regions like Taiwan, with more than half a million pieces of controversial messaging detected in early 2025, according to Taiwanese officials cited in The Straits Times.

The timing is telling: as the U.S. grapples with its own AI ethics debates, China is integrating these technologies into everyday tools, from education to military applications. Posts on X from industry observers highlight Beijing’s push for AI literacy in school syllabi and free campus access to models like DeepSeek, contrasting with American concerns over job losses from automation – more than 10,000 in 2025 alone, according to aggregated tech news feeds. This disparity fuels fears that China could dominate in deploying AI for cyber operations, infiltrating systems, as experts warned in a Moneycontrol deep dive.

Global Implications and Calls for Action

The broader ecosystem of AI disinformation isn’t limited to China; Russia’s efforts, though perhaps more notorious, are being outpaced in sophistication, as noted by former NSA officials at DEF CON briefings reported in The Register. Yet China’s model – exporting AI control technology akin to how Western firms monetize data – poses a unique totalitarian threat, as Bill Gertz articulated in a Hillsdale College speech shared widely on X. Vanderbilt’s own AI initiatives, such as its AI Days 2025 event on March 5-6, aim to foster countermeasures, bringing together experts to explore defensive strategies against such manipulations.

For industry insiders, the stakes are clear: without renewed investment in AI forensics and international cooperation, the erosion of democratic discourse could accelerate. As Axios reported just today, generative AI is making disinformation “far more effective and harder for average users to detect,” a sentiment echoed in a New York Times opinion piece calling for urgent U.S. action. The era of AI propaganda demands not just vigilance but a proactive overhaul of how we safeguard information integrity.

Strategies for Countering the Threat

Countermeasures are emerging, albeit slowly. NewsGuard’s April 2025 AI Quarterly report tracks global disinformation warfare, noting how AI models themselves are becoming battlegrounds for adversaries. Microsoft’s threat intelligence, from as early as April 2024, predicted China’s ramp-up in election disruptions using AI-generated content, a prediction now manifesting in real-time operations. Insiders advocate for public-private partnerships, like those discussed at Vanderbilt’s AI Showcases, to develop detection tools that leverage machine learning to spot AI hallmarks in propaganda.
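The reporting does not specify how such detection tools work, so the following is only a minimal sketch under assumptions: a scikit-learn pipeline that learns stylistic “hallmarks” from character n-grams and scores how AI-like a new post reads. The example texts and labels are invented for illustration; a production detector would be trained on large labeled corpora and would combine many additional signals such as perplexity, posting patterns, and account metadata.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled samples; a real detector would train on large
# corpora of known human-written and known AI-generated posts.
texts = [
    "grabbed coffee with sam, trains were late again, typical monday",
    "lol that game last night was unreal, still can't believe the ending",
    "It is imperative to recognize the multifaceted implications of this policy.",
    "This development underscores a pivotal paradigm shift in regional dynamics.",
]
labels = [0, 0, 1, 1]  # 0 = human-written, 1 = AI-generated (toy labels)

# Character n-grams serve as a simple proxy for stylistic hallmarks.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

suspect = "This initiative underscores a pivotal shift in strategic dynamics."
print(detector.predict_proba([suspect])[0][1])  # estimated probability the post is AI-generated
```

The choice of character n-grams in the sketch is deliberate: they need no language-specific tokenization and pick up the formulaic phrasing that often marks machine-generated text, though on their own they are far too weak for operational use.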

Ultimately, this isn’t just a tech race; it’s a battle for narrative control. As China refines tools like GoLaxy to reshape global opinion, the U.S. must reinvest in its defenses, blending policy with innovation to preserve the fragile ecosystem of truth in an increasingly synthetic world.
