Lawmakers Ramp up Attacks on Section 230: No Matter Who Wins the Election, Freedom of Speech May Lose
By A. Mackenna White
With a potential change of administration approaching the White House, the Department of Justice recently advanced President Trump’s agenda against one of his favorite targets, social media, submitting to Congress draft legislation that would strip online service providers of many of the protections currently afforded them under Section 230 of the Communications Decency Act. The draft legislation comes as no surprise: it follows months of threats of government action against social media platforms like Facebook and Twitter that, much to Trump’s chagrin, recently began labeling posts containing misleading, false, or altered content. As worrisome and personally motivated as the proposal appears, Trump and the Department of Justice are not alone. This year has seen a push for Section 230 reform from both sides of the aisle. While the parties differ greatly on what should change and how, they agree that Section 230 is overly broad and has allowed platform providers to amass an undesirable degree of control over the content available on the internet.
The Communications Decency Act was signed into law by President Clinton in 1996. Originally designed to regulate pornographic and other indecent material on the internet, the act contains an important carve-out known as Section 230. Section 230 distinguishes between those who create online content, “information content providers,” and the “online service providers” who host platforms on which content is posted but do not create that content themselves. Under Section 230, service providers are not treated as publishers of third-party content and therefore cannot be held legally liable for words posted by their users. Section 230 also shields service providers from liability for “good faith” measures taken to restrict content they deem unlawful, objectionable, or otherwise in violation of their terms of service, even if that content is lawful speech. As currently written, the act gives providers wide discretion to define “objectionable.” Underlying the debate over Section 230 reform is the political parties’ ideological disagreement over which of these two protections is the root of the problem.
Conservative proposals attack the latter prong, which they assert has allowed service providers to exercise ideological bias to unfairly target and censor right-wing content.[1] The DOJ’s proposed amendment, among other things, eliminates service providers’ ability to restrict access to content they deem “otherwise objectionable” and imposes a strict definition of “good faith,” something Congress originally declined to do. Under the new regulations, service providers could no longer restrict content on “deceptive or pretextual grounds,” meaning, presumably, that blocked content providers could bring actions claiming that the stated reason for removing their content was pretextual and the true reason ideological. Another proposed change would prevent blocking of content that is “similarly situated” to unrestricted content; put more plainly, service providers would have to restrict conservative and liberal content on any particular topic equally. Take, for example, a scenario in which a service provider restricted access to a post claiming that masks are unnecessary to prevent the spread of COVID, as Twitter did with Donald Trump Jr.’s account in late July. Under the proposed regulations, Twitter might be compelled to likewise restrict tweets advocating that masks do prevent the spread of the virus as “similarly situated” content, even if the platform (not to mention the CDC and the majority of the medical community) believes that the first post contains misinformation and the second does not. Either proposed change would engender massive litigation by content providers seeking access to a platform and would require courts or administrative judges (ironic given the conservative view of the administrative process and the deep state) to pass judgment on the content of materials, something difficult to square with First Amendment jurisprudence.
Republican lawmakers have introduced at least four other bills this year containing similar amendments.
Liberals, on the other hand, focus on the first prong of Section 230, arguing that service providers should not enjoy immunity from liability for hate speech and misinformation that flows through their sites. In a January interview with the New York Times, Joe Biden called for the “immediate” revocation of Section 230’s exemption from liability. According to the former vice president and other Democratic lawmakers, it is “irresponsible” for the law to shield internet service providers from the legal repercussions of such content. Democrats have yet to formally propose who should ultimately determine what qualifies as misinformation, falsehood, or hate speech. No doubt their amendments would support restricting access to conservative viewpoints on contentious topics such as COVID, election fraud, and Black Lives Matter, but their position on misinformation posted by their own party has been inconsistent. Representative Alexandria Ocasio-Cortez, for example, strongly criticized the New York Times for publishing an opinion piece by Republican Senator Tom Cotton that liberals deemed misinformation. Yet, when caught tweeting her own misinformation in early 2019, she retorted that fact-checkers were more concerned with being factually correct than morally right. Would a liberal amendment to Section 230 apply equally to all misinformation? Or, more likely, would it become yet another exercise in party politics? The answers to those questions could redefine the freedom of the internet as we know it.
Both parties’ criticisms of Section 230 suffer from palpable partisan motivations, flaws, and ambiguities. Both likewise seek to walk back the legislative intent clearly set forth by Congress in the original act to “promote the continued development” of online platforms and resources and to preserve the “vibrant and competitive free market…unfettered by Federal or State regulation.” 47 U.S.C. § 230(b)(1) and (2). Moreover, both parties have aimed their proposals and calls for revision at social media platforms, calling out Twitter, Facebook, and Instagram by name, but fail to acknowledge the far-reaching effects the proposed regulation would have on other services. Section 230’s definition of an online service provider encompasses any operator providing a platform for user content. It includes services that stream users’ chats and comments, a communal interaction on which many platforms rely to attract users, as well as platforms such as Wikipedia that serve as valuable sources of online information provided almost wholly by users. Thus, the amendments as contemplated would expose, for example, chat hosting services to potential liability for the content of their users’ online chats, texts, and blog posts, along with any platform that allows users to comment on the platform or on other users’ content.
Compliance with either set of revisions would be expensive and onerous for service and content providers alike. The revisions would not only require service providers to undertake extensive review of everything published on their sites but would also potentially hold them responsible for identifying what is or is not misinformation, such as foreign election interference, a task requiring sophistication far beyond the capability of most service providers. A redefinition of hate speech and misinformation would also trickle down to content providers. Service providers would be required to review all content posted by their users, delaying users’ ability to comment in real time and thereby stifling their freedom of speech and the exchange of information and ideas online. Both platforms and users would face increased civil and criminal liability for posted content, or alternatively for their restrictions (or lack thereof), under unclear and arbitrary standards. Finally, it is difficult to square any of these proposals with either the real-world flow of information and communication over the internet or the First Amendment.
What is clear is that regardless of the outcome in November, all online platforms publishing moderated or unmoderated third-party content should prepare for efforts to limit Section 230 and should evaluate the effects of these potential new obligations, liabilities, and costs on their business.
For further information please contact:
- A. Mackenna White at mackenna.white@lbkmlaw.com or +1.646.965.8828
- Adam S. Kaufmann at adam.kaufmann@lbkmlaw.com or +1.212.826.7001
The foregoing is for informational purposes only. It is not intended as legal advice and no attorney-client relationship is formed by the provision of this information.
[1] Just one day after leaving the hospital, Trump renewed his attack on Section 230 in response to Twitter’s labeling his tweets about the effects of COVID as potentially misleading. Both Facebook and Google recently announced policies blocking political ads after polls close on November 3rd, a move that will certainly draw further ire and retribution from the Trump administration and conservative lawmakers.