8640
Holy Shit (twitter.com)
posted by Djpele12 +8643 / -3
Crockett 2 points ago +2 / -0

The intention of 230 is actually valuable. It's effectively the same protection that a shopping mall has for the speech of people walking around in it. You can't sue the mall for another customer slandering you while you're walking around. Shopping malls act as public spaces. They kind of have to be, to fulfill their function as a place of free congregation, plus the malls have neither the interest nor capacity to monitor and control the speech of all people in the mall. A Town Square can still be a Town Square even if it's privately owned.

Once upon a time, it made sense to offer this protection as a platform to websites, because they were platforms. But two things have happened since then.

First, they have become more essential to public discourse. They have become public services. The law can compel you to grant someone access to a private road if it is the only reasonable way for them to reach their property. Certainly Twitter and Facebook have become places for public discourse that everyone with First Amendment rights has the right to speak in. Twitter is the printing press of today.

Secondly, these companies have begun to exert editorial control over users' content. They have rules about what can and can't be said, and how things must be said, and assert full, sole, unaccountable authority to censor, suppress, and manipulate any user content they wish. And they have the technology to do so at a mass scale.

So on two fronts, the justification for Section 230 as it applies to social media has been completely undermined. But the principle is still important. Imagine an internet where a leftist could come on .win, anonymously post some libel or something, then sue the website and have the whole site taken down. For that matter, imagine a world where a restaurant can get sued by a customer that overhears another customer saying something offensive.

As for the vagueness of 230, it's partially by design. Obviously websites need some right to moderate content on their site: A) for spam, B) for illegal material, and C) because they get to determine what their platform is about. But C goes out the window once the platform positions itself as an integral layer of the social, marketing, informational, and political infrastructure of the whole world. Twitter is not the equivalent of a knitting forum that doesn't want to let people talk about scuba diving. By becoming more important and more public, they've lost their "private space" status, and they've lost their "acting in good faith" presumption by acting in bad faith.

So what to do about 230? You could argue it's an enforcement issue, and that on its face, editorializing user content makes you a publisher, not a platform, and therefore liable under current law. But we really need some rewording, clarification, or rewriting of the law itself, to make it explicit what differentiates platform from publisher, and what protections and privileges come with each of those statuses. Outright removal of the section would open the gates for a ton of litigation. (And that's not a bad way for this kind of statute to take shape, by the way. What would happen is a bunch of suits get filed, cases work their way up to higher courts, probably SCOTUS eventually, which would decide what standards properly separate platform from publisher. A lot of lawyers would get rich along the way, however.)

It's wise of Trump to demand complete removal up front, though. It's a classic Trump tactic to ask for more than you want. Then people will freak out and say "Whoa whoa! Let's not be hasty. How about we just modify it so everyone's happy without getting a million lawyers involved?" rather than them just saying "Meh, no."