New York Governor Signs RAISE Act for AI Safety Regulations

New York Governor Kathy Hochul signed the RAISE Act into law on December 20, 2025, regulating powerful AI models by mandating public disclosure of safety protocols and 72-hour incident reporting. Following California's lead, it balances innovation and oversight amid mixed reactions from tech stakeholders.
Written by Juan Vasquez

New York’s RAISE Act: Pioneering Guardrails for the AI Frontier

New York Governor Kathy Hochul has officially signed the Responsible AI Safety and Enforcement (RAISE) Act into law, marking a significant step in state-level efforts to rein in the rapid advancement of artificial intelligence. The legislation, which took effect immediately upon signing on December 20, 2025, targets the most powerful AI models developed by major tech companies. It mandates that developers of these “frontier” systems—those trained with computing power exceeding 10^26 floating-point operations—publicly disclose their safety testing protocols and report any critical safety incidents to state authorities within 72 hours.
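To give a sense of the scale that 10^26 threshold implies, here is an illustrative sketch (not drawn from the statute's text) that applies the widely used 6ND rule of thumb—roughly six floating-point operations per model parameter per training token—to check whether a hypothetical training run would fall under the law. The function names and example figures are assumptions for illustration only.

```python
# Illustrative only: the 6*N*D approximation (compute ~ 6 x parameters x
# training tokens) is a common heuristic, not a method defined by the RAISE Act.

RAISE_THRESHOLD_FLOPS = 1e26  # compute threshold named in the law


def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens


def exceeds_raise_threshold(params: float, tokens: float) -> bool:
    """True if estimated training compute exceeds the 10^26 FLOP threshold."""
    return estimated_training_flops(params, tokens) > RAISE_THRESHOLD_FLOPS


# Hypothetical example: a 1-trillion-parameter model on 20 trillion tokens
# lands at 6 * 1e12 * 20e12 = 1.2e26 FLOPs, just over the line.
print(exceeds_raise_threshold(1e12, 20e12))   # True
# A 1-billion-parameter model on 1 trillion tokens (6e21 FLOPs) is far below.
print(exceeds_raise_threshold(1e9, 1e12))     # False
```

Under this heuristic, today's consumer-facing models sit well below the threshold, which is consistent with the article's point that the law targets only the largest frontier systems.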

This move positions New York as the second state, following California, to implement comprehensive regulations on advanced AI technologies. The RAISE Act emerged from months of negotiations, revisions, and lobbying pressures, reflecting the growing tension between innovation and oversight in the tech sector. Proponents argue it fosters transparency without stifling creativity, while critics worry it could impose burdensome requirements on an industry still in its nascent stages.

The bill’s journey to Hochul’s desk was anything but straightforward. Initially passed by the New York State Legislature in June 2025, the original version drew inspiration from California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, known as SB 1047. However, intense pushback from Big Tech firms led to substantial amendments, softening some provisions to align more closely with the California model.

Negotiations and Compromises Shape the Final Bill

According to reporting from Politico, Hochul’s administration engaged in heated discussions with industry stakeholders, ultimately opting for changes that reduced potential regulatory hurdles. For instance, the revised act eliminates earlier proposals for third-party audits and instead emphasizes self-reported safety measures, a concession that tech giants like Google and OpenAI reportedly favored.

The governor’s office highlighted the law’s focus on accountability for high-risk AI applications, such as those capable of enabling cyber attacks or generating harmful content. In a statement on the official site, Governor Kathy Hochul described the RAISE Act as “nation-leading” in establishing standards for transparency and safety, ensuring that developers prioritize public welfare alongside technological progress.

Industry insiders note that these modifications were crucial to gaining broader support. Earlier drafts faced criticism for being overly prescriptive, potentially driving AI research out of New York to more lenient jurisdictions. The final version strikes a balance by requiring companies to outline their risk mitigation strategies, including safeguards against misuse in areas like weapons development or disinformation campaigns.

Comparisons to California’s Trailblazing Efforts

New York’s approach draws heavily from California’s SB 1047, which was signed into law earlier in 2025 and set a precedent for state intervention in AI governance. Both laws target models with immense computational demands, but New York’s RAISE Act introduces unique elements, such as mandatory incident reporting timelines that are stricter in some respects. Analysts point out that while California’s law includes provisions for whistleblower protections, New York’s emphasizes public disclosure of safety frameworks, potentially offering more visibility to regulators and the public.

Coverage in The New York Times details how Big Tech’s influence led to alignments between the two states’ regulations, with companies urging harmonization to avoid a patchwork of conflicting rules. This convergence could signal the beginning of a de facto national standard, as other states observe and potentially emulate these models.

However, not all reactions have been positive. Progressive voices, as captured in The American Prospect, accuse Hochul of capitulating to corporate interests by diluting the bill’s enforcement mechanisms. The article suggests that the governor’s revisions transformed a robust oversight tool into a lighter-touch framework, raising questions about its effectiveness in preventing AI-related harms.

Industry Reactions and Economic Implications

Tech leaders have offered mixed responses to the signing. Some, including representatives from AI startups in New York City, view the RAISE Act as a reasonable compromise that provides clarity without excessive red tape. By requiring only high-threshold models to comply—those far beyond current consumer-facing AIs like ChatGPT—the law avoids overburdening smaller innovators.

On social media platforms like X, sentiment reflects a divide. Posts from users in the tech community express relief that the act isn’t as stringent as initially feared, with some praising it as a step toward responsible development. Others, however, warn of potential innovation chills, echoing concerns from earlier in the year when similar federal proposals surfaced.

Economically, New York stands to benefit from positioning itself as a hub for ethical AI. The state, home to numerous tech firms and research institutions, could attract talent and investment by demonstrating a commitment to safe practices. Yet, there’s apprehension that compliance costs might deter startups, pushing them toward states with looser regulations.

Broader Context of AI Regulation Nationwide

The RAISE Act arrives amid a flurry of AI-related legislative activity across the U.S. Just weeks prior, Hochul signed separate bills addressing AI in entertainment, including protections against deepfakes of deceased performers, as reported by Axios. These measures underscore a holistic approach to AI’s societal impacts, from creative industries to critical infrastructure.

Federally, discussions around AI safety have intensified, with figures like Senator Cynthia Lummis introducing the RISE Act of 2025 to promote innovation while ensuring transparency. Though distinct from New York’s RAISE, these efforts highlight a growing consensus on the need for oversight as AI capabilities expand exponentially.

Critics argue that state-level actions, while important, may not suffice without national coordination. Incidents like AI-generated misinformation in elections or autonomous systems failures underscore the urgency, prompting calls for federal guidelines that build on state initiatives.

Potential Challenges and Enforcement Hurdles

Implementing the RAISE Act will test New York’s regulatory apparatus. The law empowers the state’s Attorney General to enforce compliance, with violations of the disclosure and reporting requirements subject to civil penalties. However, questions remain about how effectively the state can monitor complex AI systems, especially given the technical expertise required.

Experts anticipate legal challenges from tech companies, possibly arguing that the act oversteps into federal territory or infringes on trade secrets. Such disputes could delay full enforcement, mirroring battles over data privacy laws in recent years.

Moreover, the 72-hour reporting window for safety incidents introduces practical difficulties. Defining what constitutes a “critical” incident—such as an AI model producing biased outputs or enabling security breaches—will require clear guidelines from state officials.

Global Perspectives and Future Trajectories

Internationally, New York’s move aligns with efforts in the European Union, where the AI Act imposes risk-based regulations on high-impact technologies. This transatlantic synergy could influence global standards, pressuring companies to adopt uniform safety practices.

Looking ahead, the RAISE Act may evolve as AI technologies advance. Provisions for periodic reviews allow for updates, ensuring the law adapts to emerging risks like superintelligent systems or AI in biotechnology.

For industry insiders, the act represents a pivotal moment: a framework that encourages innovation while embedding safeguards. As one venture capitalist noted in discussions on X, it could foster a more trustworthy AI ecosystem, ultimately benefiting long-term growth.

Stakeholder Voices and Ongoing Debates

Labor unions and civil rights groups have largely welcomed the legislation, viewing it as a bulwark against AI-driven job displacement and discrimination. SAG-AFTRA, for instance, supported related bills, emphasizing protections for performers in an era of synthetic media.

Conversely, some developers express concern over the administrative burden. Reporting requirements, while not onerous for giants like Meta, could strain resources for mid-tier firms pushing the boundaries of AI research.

Debates on X reveal a spectrum of opinions, from optimism about reduced existential risks to skepticism about government overreach. These conversations underscore the act’s role in shaping public discourse on AI ethics.

Long-Term Impacts on Innovation Ecosystems

In the years to come, the RAISE Act could redefine how AI is developed in New York, potentially setting benchmarks for transparency that ripple outward. By mandating public safety protocols, it invites scrutiny from academics and watchdogs, fostering a collaborative approach to risk management.

Comparisons to historical regulations, such as those on pharmaceuticals or aviation, suggest that early oversight can prevent catastrophes while spurring safer innovations. New York’s financial sector, already leveraging AI for trading and fraud detection, may see enhanced stability through these measures.

Ultimately, the act’s success hinges on balanced enforcement. If implemented thoughtfully, it could position the state as a leader in responsible AI, blending economic vitality with societal protections.

Reflections from Policy Experts

Policy analysts, drawing from sources like TechCrunch, note that the bill’s modifications reflect pragmatic governance. Hochul’s proposals in early December, as covered in prior Axios reporting, laid the groundwork for a law that’s enforceable yet flexible.

The governor’s veto of unrelated measures, such as an LLC transparency bill mentioned in Crain’s New York Business, highlights her selective approach to business regulations, prioritizing AI as a high-stakes arena.

As AI continues to permeate daily life, New York’s RAISE Act stands as a testament to proactive policymaking, navigating the delicate balance between progress and precaution. Its unfolding legacy will likely influence not just the state, but the broader trajectory of technological governance.
