Algorithms on Trial: Inside the Push to Overhaul Section 230 and Hold Tech Giants Accountable

Senators John Curtis and Mark Kelly's Algorithm Accountability Act seeks to reform Section 230 by allowing lawsuits against social media companies over harmful algorithmic recommendations. The bipartisan bill targets radicalization and misinformation, could reshape tech liability, and promises greater platform accountability as AI advances.
Written by John Marshall

WASHINGTON—In a bipartisan move that could reshape the digital landscape, Senators John Curtis (R-Utah) and Mark Kelly (D-Ariz.) have introduced the Algorithm Accountability Act, a bill aimed at piercing the long-standing liability shield provided by Section 230 of the Communications Decency Act. This legislation, unveiled amid growing concerns over social media’s role in amplifying harmful content, seeks to make tech platforms responsible for the algorithmic decisions that push dangerous material to users.

The bill targets the heart of how platforms like Facebook, YouTube, and TikTok curate content. Under current law, Section 230 grants broad immunity to online services for user-generated content, but the new proposal would carve out exceptions when algorithms actively promote harmful posts, such as those inciting violence or radicalization. Proponents argue this reform is essential to combat real-world harms, from election interference to mental health crises among young users.

Introduced on November 19, 2025, the Algorithm Accountability Act arrives at a pivotal moment. With the incoming administration signaling tougher stances on Big Tech, the bill reflects a rare cross-aisle consensus on the need for accountability. Curtis, a conservative known for his tech-savvy approach, and Kelly, a former astronaut with a focus on national security, frame the legislation as a targeted fix rather than a wholesale repeal of Section 230.

The Origins of Section 230 and Its Evolving Controversies

Section 230, enacted in 1996 as part of the Communications Decency Act, was designed to foster the growth of the internet by protecting platforms from lawsuits over third-party content. As explained by the Electronic Privacy Information Center (EPIC) in its analysis of platform accountability, the law's core provision states that no provider or user of an interactive computer service shall be 'treated as the publisher or speaker' of content posted by others, while also allowing moderation without liability.

Over the years, this shield has been both praised and criticized. Tech companies credit it for enabling free expression and innovation, but critics, including lawmakers from both parties, argue it has allowed platforms to evade responsibility for algorithmic amplification of misinformation, hate speech, and extremism. A 2024 ruling by the U.S. Court of Appeals for the Third Circuit, detailed in a post on TeachPrivacy, highlighted limits to Section 230, suggesting that algorithmic decisions could fall outside its protections.

Bipartisan Momentum Builds for Reform

The Algorithm Accountability Act builds on prior efforts, such as the similarly named Algorithmic Accountability Act of 2023, introduced by Senator Ron Wyden (D-Ore.) and others, which focused on requiring companies to assess the impacts of automated decision systems, according to a summary from Wyden's office. The new bill goes further by explicitly tying liability to harmful algorithmic recommendations.

Senator Curtis emphasized the need for change in a statement: ‘Algorithms are not neutral—they make decisions that can have profound impacts on society,’ as reported by The Verge. Kelly added that the bill would ‘ensure platforms can’t hide behind outdated laws while their algorithms push dangerous content.’

Recent news underscores the urgency. Posts on X (formerly Twitter) from users like @Deseret and @verge, dated November 2025, highlight public sentiment favoring reform, with one noting the bill’s potential to address radicalization through boosted content.

How the Bill Would Change the Game

At its core, the Algorithm Accountability Act would amend Section 230 to remove immunity when a platform’s algorithm recommends content that causes harm, such as inciting violence or contributing to mental health issues. This is distinct from mere hosting of content, focusing instead on the active role of recommendation systems.

According to details from Congress.gov on the 2023 precursor bill, companies would need to conduct impact assessments for automated decision-making tools. The 2025 version expands this to allow users to sue directly, potentially opening the floodgates for litigation against tech giants.

Industry insiders warn of ripple effects. A report from the American Action Forum on Section 230 in the AI era, published October 16, 2025, questions how such reforms might apply to generative AI, which creates content based on prompts.

Tech Industry Pushback and Legal Challenges

Big Tech has long defended Section 230 as essential. Meta (formerly Facebook) and Google have argued that without it, innovation would be stifled by legal threats. A 2022 press release from Senator Wyden's office on the Algorithmic Accountability Act of 2022 noted supporters' calls for transparency in automated systems.

However, opposition is mounting. X posts from figures like Vivek Ramaswamy in 2024 criticized related rulings, such as Murthy v. Missouri, for allowing government influence over platforms without accountability. A December 2022 article in The Hill quoted Senator Josh Hawley (R-Mo.) saying the shield ‘allows platforms to escape any real accountability.’

Broader Implications for AI and Content Moderation

The bill’s focus on algorithms intersects with AI governance. A Harvard Law Review piece from April 2025, titled Beyond Section 230: Principles for AI Governance, argues for principles that hold AI systems accountable, echoing the act’s intent.

White House support for similar reforms was signaled in a 2022 proposal covered by the Washington Examiner, which backed overhauls including algorithm transparency.

Experts predict court battles. A 2021 WIRED article on Congress targeting algorithms noted that reform proposals often falter on details, but the bipartisan nature of this bill could propel it forward.

Voices from the Ground: Public and Expert Reactions

Reactions on X reflect a mix of enthusiasm and skepticism. Posts from November 2025, such as those from @Entangle2030 and @BrighamTomco, praise the bill for challenging Big Tech’s defenses, with one user urging it be prioritized to tackle algorithms not covered by Section 230.

A 2023 X post from @TexasLindsay_ celebrated a judicial finding that Section 230 unconstitutionally protects platforms as publishers. Meanwhile, a February 2025 post from NEWSMAX reported FCC Chair Brendan Carr’s plans to weaken Section 230 protections.

Potential Outcomes and Future Pathways

If passed, the act could force platforms to redesign their algorithms to prioritize safety over engagement. This might lead to more cautious content curation, potentially reducing viral misinformation but raising free speech concerns.

Comparisons to past reforms, like the 2022 introduction covered by Wyden’s office, show evolution toward stricter accountability. As AI advances, this bill could set precedents for global regulations.

Industry watchers, per a recent BizToc article on Section 230 loopholes, argue the law shouldn't preclude liability for harmful content pushes.

Navigating the Political Landscape

With the 119th Congress underway, the bill’s fate hinges on committee hearings. Supporters hope for swift action, but tech lobbying could delay it.

For historical context, a 2021 X post by @Porter_Anderson referenced CNN's Elie Honig describing Section 230 as a 'big-ticket item' for platforms fearing liability for their algorithms.

As debates intensify, the Algorithm Accountability Act represents a critical juncture in balancing innovation with responsibility in the digital age.
