In the heart of San Francisco’s tech corridor, a lone activist named Guido Reichstadter has captured global attention by entering the second week of a hunger strike outside the headquarters of AI company Anthropic. Reichstadter, a 31-year-old software engineer turned protester, pitches his tent daily on the sidewalk, surviving on water alone, to demand an immediate halt to the development of advanced artificial intelligence systems. His protest, which began on September 1, 2025, stems from deep-seated fears that unchecked AI could lead to existential risks for humanity, including loss of control over superintelligent machines.
Reichstadter’s action is not isolated. Across the Atlantic in London, another activist, Michael Trazzi, is staging a parallel demonstration outside Google DeepMind’s offices and, as of September 14, 2025, is on day 13 of his own hunger strike. An earlier version of this article incorrectly stated that both men are part of a loosely organized group called PauseAI, which argues that the rapid pursuit of artificial general intelligence (AGI) by tech giants poses catastrophic dangers, from widespread job displacement to scenarios where AI could outsmart and overpower human oversight. However, Holly Elmore, the Executive Director of PauseAI US, reached out to WebProNews with the following statement:
Guido’s group is StopAI and Michael is acting independently. Neither has claimed to be acting on behalf of PauseAI.
The Roots of AI Anxiety and Protester Motivations
These hunger strikes highlight a growing schism within the tech world, where optimism about AI’s potential clashes with dire warnings from ethicists and former insiders. Reichstadter, who previously worked on machine learning projects, told reporters from SFGATE that he views the current AI race as an “emergency” akin to nuclear proliferation, urging companies like Anthropic to pause frontier model training until robust safety frameworks are in place. Trazzi, a French entrepreneur with a background in AI startups, echoes this sentiment, posting on social media about the need for international treaties to regulate AI comparable to those governing chemical weapons.
The protests draw inspiration from historical movements, such as the anti-nuclear campaigns of the 20th century, but are amplified by recent AI milestones. In interviews shared via Futurism, Reichstadter describes how breakthroughs in models like Anthropic’s Claude series and DeepMind’s Gemini have deepened his concerns; he fears these systems could soon achieve superhuman capabilities without adequate safeguards.
Corporate Responses and Industry Ripples
Anthropic, founded in 2021 by former OpenAI executives with a focus on “responsible” AI, has acknowledged the protests but maintained its commitment to development. A company spokesperson stated in a release covered by Business Insider that while the company respects the activists’ passion, pausing research would hinder progress on aligning AI with human values. Google DeepMind has similarly emphasized its safety research initiatives, though neither firm has engaged directly with the strikers beyond offering water and medical checks.
The demonstrations have sparked broader industry debate. Tech leaders like OpenAI’s Sam Altman have publicly discussed AI risks, but critics argue such rhetoric often serves as marketing rather than genuine restraint. Posts on X (formerly Twitter) from users in the AI community reflect mixed sentiments, with some praising the protesters’ dedication while others dismiss them as alarmist, noting that competition with China makes unilateral pauses impractical.
Broader Implications for AI Governance
As the strikes persist, health concerns mount: Reichstadter has lost over 15 pounds and reports dizziness, yet he vows to continue until executives meet with him for talks. Medical experts interviewed by UNILAD Tech warn of organ damage after prolonged fasting, adding urgency to the standoff. Supporters of the protests hope this personal sacrifice will pressure policymakers, citing endorsements from figures like AI pioneer Yoshua Bengio, who has called for regulatory slowdowns.
The protests underscore a pivotal moment for the AI sector, where venture capital inflows—exceeding $50 billion in 2025 alone—fuel relentless advancement. Yet, as reported in India Today, similar actions in other cities signal a rising tide of grassroots resistance, potentially influencing upcoming global summits on AI ethics.
Echoes of Past Protests and Future Trajectories
These events evoke earlier tech backlash, such as the 2023 demonstrations against Google’s Project Nimbus, a cloud and AI contract with the Israeli government, as documented in archived X posts from groups like Jewish Voice for Peace Bay Area. Those actions, involving die-ins and worker walkouts, pressured companies on ethical grounds, much like today’s hunger strikes aim to do for existential AI threats.
Looking ahead, industry insiders speculate that sustained pressure could lead to voluntary moratoriums or stricter regulations, especially with the U.S. and EU drafting AI laws. For now, Reichstadter and Trazzi’s resolve tests the limits of protest in an era where code could reshape civilization, forcing a reckoning between innovation’s promise and its perils. As one AI ethicist noted in a San Francisco Standard profile, “This isn’t just about hunger—it’s about starving the beast before it consumes us all.”