Freedom of Speech & Platform Liability
Sunsetting Section 230, the decision against TikTok in the "blackout challenge" case, the charges against Telegram's CEO, and Brazil banning X.
Introduction
Today's post is about four unrelated events that all point towards the same trend: a global shift in the regulatory temperature towards the liability of social media platforms.
Some of my readers may think this is a political discussion. We can have an argument about that, but in my view it's a legal discussion. The distinction between the political “left” and “right” doesn’t make a lot of sense here, and frankly, it doesn’t make a whole lot of sense in general.
If you think the internet is better left to its own devices, without censorship, regulation, or involvement from government, that is an anti-establishment argument, not a freedom-of-speech argument. By the way, that is how I see the right vs. left debate in general: it has little to do with liberalism or conservatism anymore; it’s about whether you support the establishment or are fundamentally against the work that governments and democratic institutions do.
Anyone wanting to run a social media network with almost no moderation and without cooperating with national police authorities had better do it anonymously, and not as a celebrity billionaire. Otherwise, they are asking for trouble. If an online platform refuses to bow and comply with take-down notices from governments, someone still has to be held accountable, and the company behind the platform and its CEO are next in line after the users.
That’s the legal part of the discussion, but there is a moral dimension to it as well. If you own a platform that you know for a fact is used for distributing child pornography, selling dangerous drugs, and planning terrorist attacks, do you have a moral duty to intervene and cooperate with law enforcement? I think yes. If you don't, that automatically makes you an accomplice. Freedom of speech is not an absolute right and never has been, not in the US or anywhere else that I am aware of.
Refusing to moderate a platform is just as bad as, if not worse than, the other extreme: complete surveillance and control. As always, there is a balance to be struck, and the EU's chat control proposal, for instance, takes it too far in the opposite direction. It should be possible to have a social media platform that respects people's privacy by default and at the same time can cooperate with local police authorities based on valid suspicions of criminal activity. Additionally, the platform has to comply with government orders to take down illegal and/or harmful material. That is the only way social media platforms and governments can peacefully coexist.
These are the four events we will take a closer look at in today’s post to understand how the regulatory temperature towards online platforms is shifting:
The draft legislation to sunset Section 230 (link)
The deadly “black-out challenge” on TikTok (link)
The charges against Telegram CEO Pavel Durov (link)
Brazil blocks X (link)
The draft legislation to sunset Section 230
Section 230(c)(1) of Title 47 of the United States Code was enacted as part of the Communications Decency Act of 1996. The provision shields social media platform providers from liability for content that is posted on the platform by its users. It has become known as “the 26 words that created the internet”:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The EU followed the American lead on exempted liability for online platforms by enacting the Electronic Commerce Directive (the “e-Commerce Directive”) in 2000, which in Articles 12-14 exempts “intermediary service providers” from liability for “mere conduit” (passively transmitting information), “caching”, and “hosting”. In effect, Articles 12-14 provide a protection very similar to Section 230(c)(1), and it has now been carried over to the Digital Services Act (DSA) in Articles 4-6.
The DSA requires service providers to publish a report once a year accounting for “any content moderation that they engaged in during the relevant period” (Article 15(1)), and to abide by government orders to act against illegal content (Article 9) and orders to provide information (Article 10). The DSA sets out many burdensome obligations for “very large online platforms” - for example, making thorough risk assessments and taking various organizational measures to promote fairness and transparency - but the exempted liability for user-generated content remains fully intact.
Usually, the US leads in tech innovation, while Europe leads in regulatory efforts. But in May 2024, two high-ranking members of the House Energy and Commerce Committee unveiled bipartisan draft legislation to sunset Section 230. Concretely, the “Section 230 Sunset Act” proposes a term of 18 months, until 31 December 2025, for the US Congress to come up with a new legal framework. As the authors of the draft, Cathy McMorris Rodgers and Frank Pallone Jr., wrote in an op-ed for the Wall Street Journal:
“Our measure aims to restore the internet’s intended purpose—to be a force for free expression, prosperity and innovation. It would require Big Tech and others to work with Congress over 18 months to evaluate and enact a new legal framework that will allow for free speech and innovation while also encouraging these companies to be good stewards of their platforms. Our bill gives Big Tech a choice: Work with Congress to ensure the internet is a safe, healthy place for good, or lose Section 230 protections entirely.”
“Sunsetting” Section 230 could fundamentally change content moderation standards and likely how social media platforms function in general. Whether that would be for better or worse is up in the air. Organizations such as the Electronic Frontier Foundation have come out strongly against the proposal, claiming that Section 230 “lays the groundwork for speech of all kinds across the internet”.
I am far from convinced that repealing Section 230 is the right thing to do. However, the mere fact that US lawmakers held a hearing about such a fundamental change to internet law is a small but not insignificant sign that regulators' attitude towards the big online platforms is changing.
The deadly “black-out challenge” on TikTok
The boundaries of Section 230(c)(1) were tested before the Supreme Court in two cases last year, Gonzalez v. Google LLC and Twitter v. Taamneh. In both cases, the plaintiffs were aggrieved families of victims of terrorist attacks who claimed that the platforms, YouTube and Twitter respectively, should have done more to protect users against propaganda and recruitment material made by the terrorist organization ISIS.
In both cases, the Supreme Court declined to comment on the scope of protection granted by Section 230(c)(1), which means the platforms won. The Supreme Court did not find that YouTube and Twitter directly and knowingly provided assistance for the attacks (the legal standard is called “aiding and abetting” under the Anti-Terrorism Act).
A few months ago, I thought that was the end of the discussion about whether the US Supreme Court would be willing to reinterpret and narrow the scope of Section 230’s protection. Luckily, in my view, new legal developments have emerged since then that have reignited both speculation and hope.