I'm just a random dude and definitely not an expert on this topic, but it seems to me the issue with big tech and censorship comes down to this debate:
Side A says: freedom of speech/expression/press is a fundamental right. When companies decide to censor certain ideas or groups, it can become politicised and a tactic of control. Obviously a valid point.
Side B says: bad ideas/content exist (or there's content I'm offended by or don't want to see), so we shouldn't promote these things and therefore censorship is justified. For example: I don't want to see gore, or don't want my children to see porn. Also a valid point.
Section 230 gives some idea as to its intent:

(b) Policy. It is the policy of the United States—
(1) to promote the continued development of the Internet and other interactive computer services and other interactive media;
(2) to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation;
(3) to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services;
(4) to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material; and
The key point being: "maximize user control over what information is received".
The problem is that the "technology" that has been developed is the set of AI systems big tech uses to monitor and censor certain content. Those systems, along with the companies running them, have the control to censor information, not the user. This is where the problem lies.
To me the solution lies in requiring tech companies to let the user control what is censored, not some computer AI or woke tech CEO. Don't want to hear about Trump? No problem - turn off that setting in your feed profile. That would require these tech companies to show the user how they are cataloging and organizing ideas, but it leaves the person in control. They would need to choose their own brainwashing; the responsibility is now on the individual to decide what they think (imagine that)!
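To make the idea concrete, here's a minimal sketch of what "user-controlled filtering" could look like. Everything here is invented for illustration (the Post/FeedProfile names, the idea that the platform attaches topic labels to posts); the point is only that the mute list lives with the user, and filtering happens per-user rather than platform-wide.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    topics: set  # topic labels the platform's classifier attached (hypothetical)

@dataclass
class FeedProfile:
    # The user's own mute list - the platform exposes it, the user edits it.
    muted_topics: set = field(default_factory=set)

    def mute(self, topic: str):
        self.muted_topics.add(topic)

    def unmute(self, topic: str):
        self.muted_topics.discard(topic)

def filter_feed(posts, profile):
    """Hide only what this user chose to hide; nothing is removed globally."""
    return [p for p in posts if not (p.topics & profile.muted_topics)]

feed = [
    Post("Trump rally tonight", {"politics", "trump"}),
    Post("New kitten pics", {"animals"}),
]

profile = FeedProfile()
profile.mute("trump")
visible = filter_feed(feed, profile)  # only the kitten post for this user
```

Other users with an empty mute list would still see both posts - the filtering decision never leaves the individual.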
Also, there needs to be some sort of "interactive internet service" user bill of rights, which would require that if your site accepts user-generated content then all of that content must be treated the same: no shadow-banning certain content or blocking certain hashtags from trending. H.R.492 - Biased Algorithm Deterrence Act of 2019 (https://www.congress.gov/bill/116th-congress/house-bill/492) was already heading in this direction. If you don't like an idea or hashtag, then you as the user need to decide that and remove it from your feed. If your site or platform does filter content, then it must disclose what ideas/hashtags are censored and in what way. The user needs to understand this and have control. Without user control, the information you receive is being controlled by a machine or someone else.
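The disclosure part could be as simple as a machine-readable report the platform has to publish. This is a hypothetical sketch - the field names ("target", "action", "user_can_override") and the example filter entries are made up - but it shows the shape of what "disclose what is censored and in what way" might mean in practice:

```python
import json

# Hypothetical list of everything this platform filters, and how.
platform_filters = [
    {"target": "#example-tag", "action": "excluded-from-trending", "user_can_override": True},
    {"target": "graphic-violence", "action": "blurred-by-default", "user_can_override": True},
]

def disclosure_report(filters):
    """Produce the public, machine-readable report users (and regulators) can inspect."""
    return json.dumps({"filters": filters}, indent=2)

report = disclosure_report(platform_filters)
```

A user-facing settings page could then be generated straight from this report, with every "user_can_override" entry becoming a toggle in the feed profile.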
Bottom line is: users should have the right both to say what they want and to decide what information they receive.
These are just some random thoughts, so I'm interested to know why this would or would not work, and what some practical solutions might be.