Judging by recent tests I'd say Facebook's censorship algorithm is primarily image/pattern- and direct-link-based, and definitely not as automated or as far-reaching as they would like people to believe. These "tests" are by no means extensive and are mostly the result of attempting to trigger FB's censorship algorithm. Tests include: numerous "far right" and "conspiratorial" keyword posts, various memes/quotes/pictures, and various links. Currently all of my FB friends are Trump supporters, all posts are visible only to them, and I do not interact on any larger group pages or comment threads, so manual reports are extremely unlikely.
I am absolutely no expert, so take this with a grain of salt; maybe someone who knows more about how this works will find it of some use. Also, keep in mind that while most of us here on Patriots.win are already well aware of most of this, most normies are not.
This is what I know thus far:
-
None of my plain-text posts have been flagged automatically, which likely means there is currently a relatively low chance of their advertisement-based keyword logger being used for keyword-based censorship. Most bans that result from text-based posts seem to be the result of manual reports. This does NOT mean keyword censorship is 100% impossible, just that it likely isn't happening now. The problem with keyword censorship is that, without enough context, it can mass-flag innocent posts, inadvertently drawing attention to itself in the process. I imagine coding for that sort of context would be incredibly complex, if not outright impossible at this point, as it approaches a nearly infinite rule set.
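The context problem is easy to demonstrate with a toy sketch: a context-free keyword filter flags innocent posts right alongside its intended targets. The keyword list and sample posts below are made up for illustration; this is not Facebook's actual code.

```python
# Toy illustration of why context-free keyword filtering over-flags.
# The keyword list and sample posts are hypothetical.
FLAGGED_KEYWORDS = {"rigged", "hoax"}

def naive_flag(post: str) -> bool:
    """Flag a post if any blacklisted keyword appears, ignoring all context."""
    words = post.lower().split()
    return any(word.strip(".,!?\"'") in FLAGGED_KEYWORDS for word in words)

posts = [
    "The election was rigged!",             # intended target
    "The carnival game felt rigged to me",  # innocent, flagged anyway
    "That essay calls the moon landing a hoax and debunks the claim",  # innocent
]
print([naive_flag(p) for p in posts])  # [True, True, True]
```

Every post trips the filter, including the harmless ones, which is exactly the kind of mass false-positive behavior that would draw attention to the system.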
-
Widely distributed memes seem to get automatically flagged, but mostly through a pattern-recognition algorithm which is likely being trained via machine learning through FB's photo-tagging system. (Example: a Goebbels quote w/pic is flagged; the same quote without the pic is not.) Manipulating said photos does not seem to defeat the flagging unless the photo becomes unrecognizable, though I'm still looking into this. Removing the photos from these memes bypasses the automated flagging system, which leads me to believe the algorithm thus far does not have the ability to recognize text via pattern recognition. Also, it is important to remember: whenever you tag photos, you are feeding this algorithm.
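One plausible mechanism behind "minor edits don't help" is perceptual hashing, a standard industry technique for matching near-duplicate images; whether FB uses it this way is my assumption, not something I can confirm. The sketch below implements a minimal "average hash" over toy 8x8 grayscale grids and shows why lightly editing a meme's pixels leaves its fingerprint essentially unchanged:

```python
# Sketch of perceptual hashing ("average hash" / aHash), a common way
# platforms match near-duplicate images. Generic illustration only, not
# Facebook's actual system. Images are modeled as 8x8 grayscale grids (0-255).

def average_hash(pixels):
    """64-bit hash: each bit is 1 if that pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; a small distance means near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

# A toy "meme": bright top half, dark bottom half.
meme = [[200] * 8 for _ in range(4)] + [[50] * 8 for _ in range(4)]

# Lightly edited copy: a couple of pixels nudged, as a re-poster might do.
edited = [row[:] for row in meme]
edited[0][0] = 180
edited[7][7] = 70

# The edit doesn't flip any bits, so a distance threshold still matches it.
print(hamming(average_hash(meme), average_hash(edited)))  # 0
```

Only edits drastic enough to flip many bits past the matcher's distance threshold would evade this kind of system, which lines up with the observation that manipulation only works once the photo becomes unrecognizable.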
-
Certain links are automatically flagged as "misinformation", hidden, and a "fact check" is attached. Most of this seems to be in response to various mainstream narratives regarding immediate current events. Covid links that provide "alternative facts", i.e. actual truth, get flagged, but links regarding racism not being real do not. This leads me to believe there is a blacklist being manually curated. This can currently be bypassed by using www.archive.is to archive the webpage and posting a link to the archived copy instead.
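The archive.is bypass is consistent with a filter that matches on a link's hostname rather than its content. A minimal sketch, assuming a manually curated domain blacklist (the blocked domain below is hypothetical):

```python
# Toy model of a manually curated link blacklist, and why archiving
# bypasses it: the filter only ever sees the archive's hostname, never
# the blacklisted one. The blocked domain is a made-up example.
from urllib.parse import urlparse

BLACKLISTED_DOMAINS = {"example-banned-news.com"}

def is_blocked(url: str) -> bool:
    """Block a link if its hostname is on the curated blacklist."""
    host = urlparse(url).hostname or ""
    return host.removeprefix("www.") in BLACKLISTED_DOMAINS

original = "https://example-banned-news.com/story"
archived = "https://archive.is/abc123"  # same content, different hostname

print(is_blocked(original))  # True
print(is_blocked(archived))  # False -> the filter never sees the domain
```

Defeating the bypass would require FB to either blacklist the archive service wholesale or fetch and classify the archived page's content, both of which are more visible and more expensive than a hostname lookup. (`str.removeprefix` requires Python 3.9+.)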
-
The time frame for automated flagging and removal seems to be around 20 minutes after said content is posted. I have yet to test the response time for "fact checks", though I imagine it is similar.
Conclusion: As stated in the intro, I think Facebook's current algorithm is actually less advanced and more manual than they would like people to think it is. The idea that the algorithm is fully automated gives Facebook plausible deniability when it comes to potential bias in numerous realms: politics, culture, science, etc. I imagine the algorithm is misrepresented in this way to avoid Facebook revealing that it is now far more a publisher that actively curates content than a digital town square, which would make it vulnerable to content-related lawsuits.
LOL Now if busybody lefties who report posts could only be disabled.
I don't use Fuckbook, but try running some of the flagged memes through Fawkes. It adds subtle pixel-level perturbations to pictures to block facial recognition. See if your memes get through.
http://sandlab.cs.uchicago.edu/fawkes/#intro