A draft of President Trump’s executive order to prevent online censorship of political speech has been leaked online. The order was prompted by Twitter “fact-checking” a Trump tweet about the potential for voter fraud in many vote-by-mail proposals. However, Twitter, Facebook, and Google have long been accused by conservatives of censoring political speech.
Many people who have faced censorship or the closure of their accounts have complained to the government for years, to no avail. This executive order would end liability protection for popular platforms like Twitter, YouTube, Google, and Facebook if they take action against certain political speech and thereby, according to the order, become publishers rather than unbiased platforms for user-generated content.
Highlights From the Executive Order
The emergence and growth of online platforms in recent years raises important questions about applying the ideals of the First Amendment to modern communications technology. Today, many Americans follow the news, stay in touch with friends and family, and share their views on current events through social media and other online platforms. As a result, these platforms function in many ways as the 21st-century equivalent of the public square.
In a country that has long cherished the freedom of expression, we cannot allow a limited number of online platforms to hand-pick the speech that Americans may access and convey online. This practice is un-American and anti-democratic. When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power.
Online platforms, however, are engaging in selective censorship that is hurting our national discourse. Tens of thousands of Americans have reported, among other troubling behaviors, online platforms “flagging” content as inappropriate, even though it does not violate any stated terms of service; making unannounced and unexplained changes to policies that have the effect of disfavoring certain viewpoints; and deleting content and entire accounts with no warning, no rationale, and no recourse.
Protections Against Arbitrary Restrictions: It is the policy of the United States to foster clear, nondiscriminatory ground rules promoting free and open debate on the Internet. Prominent among those rules is the immunity from liability created by section 230(c) of the Communications Decency Act. It is the policy of the United States that the scope of the immunity should be clarified.
Section 230(c) was designed to address court decisions from the early days of the Internet holding that an online platform that engaged in any editing or restriction of content posted by others thereby became itself a “publisher” of the content and could be liable for torts like defamation. As the title of section 230(c) makes clear, the provision is intended to provide liability “protection” to a provider of an interactive computer service (such as an online platform like Twitter) that engages in ‘Good Samaritan’ blocking of content when the provider deems the content obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.
When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. By making itself an editor of content outside the protections of subparagraph (c)(2)(A), such a provider forfeits any protection from being deemed a “publisher or speaker” under subsection 230(c)(1), which properly applies only to a provider that merely provides a platform for content supplied by others. It is the policy of the United States that all departments and agencies should apply section 230(c) according to the interpretation set out in this section.
Read the full draft order below: