Section 230 is fine. It's a decent social and legal principle. The issue is companies that both censor or ban for arbitrary, targeted, or nonexistent reasons AND hide behind the liability shield 230 provides.
The key words in the statute are "in good faith". Good faith does not cover banning someone because an employee got their feelings hurt. Good faith does not cover what Project Veritas found in undercover videos of employees actively targeting conservatives. Good faith doesn't allow for Facebook's and Twitter's "MUH ALGORITHM" defense when a human employee wrote that algorithm.
What needs to happen is this: Twitter, YouTube, Facebook, and others who actively choose what to censor need to have their liability protections under 47 U.S.C. § 230(c)(2)(A) and § 230(c)(2)(B) removed.
Scott Adams talks about this at 21:10. The argument is airtight at this point.