Supreme Court To Consider Section 230, Law Credited With Enabling The Internet As We Know It
04:17 GMT 20.02.2023 (Updated: 09:33 GMT 20.02.2023)
Originally a part of the Communications Decency Act, Section 230 was passed by Congress in 1996 while the internet was still in its infancy. It protects online platforms from lawsuits that arise over content posted or shared by their users.
Section 230, the law often credited with allowing the internet to develop into what it is today, could be limited, or even thrown out, by the Supreme Court, which is reportedly scheduled to hear two cases related to the federal provision on either Tuesday or Wednesday.
The law shields online platforms from lawsuits arising from content posted or shared by their users. Without it, sites like Twitter, YouTube, Google, Reddit, and even online message boards might never have been created, or might not have survived long.
The law does not permit users to post content that would otherwise be illegal, such as libel. Instead, it places liability on the user who originally posted the content rather than on the site that hosts it or on users who share it.
Both cases, Gonzalez v. Google and Twitter v. Taamneh, were brought by family members of victims of ISIS terrorist attacks. Gonzalez v. Google accuses YouTube, which is owned by Google, of promoting ISIS content to viewers. YouTube suggests content based on users’ previous views, so if a user had watched ISIS content before, the platform may have suggested more such videos to them.
The plaintiffs argue that Google did not do enough to keep ISIS off its platform and point out that the media giant displayed advertisements on the videos, meaning that YouTube was sharing revenue with the organization that posted them.
The latter case is similar but focuses on social media sites like Twitter and Facebook*. It was brought by the families of victims of an ISIS terrorist attack that killed 29 people in Istanbul, Turkey.
Google argued in its opposition brief that it does not have the capacity to review “all third-party content for illegal or tortious material” and that limiting Section 230 could force companies to enact “sweeping restrictions on online activity.”
The Electronic Frontier Foundation (EFF), an online privacy and free speech advocacy group, agrees with Google.
“Without Section 230’s protections, many online intermediaries would intensively filter and censor user speech, while others may simply not host user content at all,” the EFF wrote previously about Section 230.
“This legal and policy framework allows countless niche websites, as well as big platforms like Amazon and Yelp to host user reviews. It allows users to share photos and videos on big platforms like Facebook and on the smallest blogs. It allows users to share speech and opinions everywhere, from vast conversational forums like Twitter and Discord, to the comment sections of the smallest newspapers and blogs.”
As the EFF points out, when Congress first passed the legislation, there were about 40 million people using the internet. By 2019, 4 billion people were online. It argues that Congress knew even then that the internet was too vast for services to review every user’s speech, an issue that is even more true today than it was then.
“Congress passed this bipartisan legislation because it recognized that promoting more user speech online outweighed potential harms,” the EFF writes. “When harmful speech takes place, it’s the speaker that should be held responsible, not the service that hosts the speech.”
But today, Section 230’s repeal or limitation has significant support in Congress from both sides of the political aisle, though for different reasons.
Conservatives contend that social media platforms have been hiding behind the law to suppress the views of right-leaning users. In 2020, then-President Donald Trump issued an executive order, instructing the Federal Communications Commission (FCC) to interpret the law more narrowly. That order was never enforced for a multitude of reasons, including that the FCC is not part of the judicial branch, does not regulate social media, and is an independent agency that does not take direction from the executive branch.
Meanwhile, some Democrats in Washington have blamed Section 230 for alleged Russian disinformation posted on social media platforms. The organization chiefly responsible for promoting that accusation was Hamilton 68, which claimed to have a list of Twitter accounts run by the Russian government or promoting Russian disinformation.
That list was largely discredited after it was leaked in the Twitter Files and reported on by journalist Matt Taibbi, who showed that it was made up almost entirely of ordinary Americans and even included legitimate journalists. Hamilton 68 ceased operations before its list was leaked.
Still, that hasn’t stopped some Democrats from continuing to warn about Russian disinformation online and using that as justification for limiting or ending Section 230.
“I would be prepared to make a bet that if we took a vote on a plain Section 230 repeal, it would clear this committee with virtually every vote,” Senator Sheldon Whitehouse (D-RI) said during a hearing of the Senate Judiciary Committee last week. “The problem, where we bog down, is that we want 230-plus. We want to repeal 230 and then have ‘XYZ.’ And we don’t agree on what the ‘XYZ’ are.”
And in 2021, Senators Amy Klobuchar (D-MN) and Ben Ray Lujan (D-NM) introduced a bill that would have removed protections for sites whose algorithms promoted health misinformation.
If the cases are successful, they would set a precedent holding tech companies liable for targeted advertisements and recommendations. Exactly what that would mean would not become clear until it happens. It could affect not only major tech giants like Twitter and Meta, but also smaller companies that lack the resources of those giants to scan content or fight frivolous lawsuits.
*Facebook and its parent company Meta have been banned in Russia for extremist activities.