Comments (22)
x79q3pb 1 point ago +1 / -0

So qualitative voting would be inclusive of, not exclusive of, X accounts, right? Otherwise it could be bypassed by having the bot pick accounts at random to vote.

I'd also run checks on keyword sentiment, coupled with bot-behavior checks. For example, when someone is trying to control the narrative by targeting keywords instead of specific accounts.

krzyzowiec 2 points ago +2 / -0

So qualitative voting would be inclusive of, not exclusive of, X accounts, right? Otherwise it could be bypassed by having the bot pick accounts at random to vote.

Yeah. You could pick them at random, but I would be looking at the voting pattern rather than at any particular accounts. That is, I'm looking at how closely, say, 18 accounts happen to be voting rather than which accounts are doing it. The odds that a real person matches someone 1:1 are vanishingly small, and even before then, if you are matching at 75% or greater, I would elevate your status to "likely bot", since most people don't even read the same topics, much less vote the same way. I'm not sure you could defeat that, because even if you throw in some randomized up/downvoting, I'm still looking at the pattern that you and those 18 other bots have created by downvoting the same posts/comments.
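A minimal sketch of that pairwise vote-matching heuristic. All names are hypothetical, and the knobs (`threshold=0.75`, `min_shared=10`) are illustrative values, not tuned ones:

```python
from itertools import combinations

def vote_similarity(votes_a, votes_b):
    """Fraction of items voted on by BOTH accounts where the vote direction matches."""
    shared = set(votes_a) & set(votes_b)
    if not shared:
        return 0.0
    matches = sum(1 for item in shared if votes_a[item] == votes_b[item])
    return matches / len(shared)

def flag_likely_bots(all_votes, threshold=0.75, min_shared=10):
    """Return account pairs whose vote agreement meets the threshold.

    all_votes: {account: {item_id: +1 or -1}}
    """
    flagged = []
    for a, b in combinations(all_votes, 2):
        shared = set(all_votes[a]) & set(all_votes[b])
        if len(shared) < min_shared:
            continue  # too little overlap to draw any conclusion
        if vote_similarity(all_votes[a], all_votes[b]) >= threshold:
            flagged.append((a, b))
    return flagged
```

Note that randomized noise votes only dilute the agreement fraction slightly; the shared downvotes still dominate the overlap, which is the point made above.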

You could visualize this as a graph of users with edges weighted by how closely they follow one another; the weight goes up with the number of matching votes. (This would work for botting AND brigading.) That would be one heuristic, and then you could layer on others. The second most important one would be the time between matching responses. If the bot author is forced to space out the likes/dislikes enough to try to evade detection... well, first, that wouldn't fix the problem for him, and second, it almost wouldn't be botting anymore, because you only care about likes/dislikes while the topic is active. (the first few days or so)
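The weighted-graph idea could be sketched like this. It's a toy adjacency map built from a vote log, with edge weight equal to the count of matching votes between two accounts; the timing heuristic would be a second layer on top, and the input format is assumed:

```python
from collections import defaultdict

def build_match_graph(vote_log):
    """Build a weighted user graph from a vote log.

    vote_log: iterable of (account, item_id, direction) tuples.
    Returns {(account_a, account_b): weight}, where weight is the
    number of items both accounts voted on in the same direction.
    """
    # Group accounts by (item, direction): everyone in one bucket matched.
    by_item = defaultdict(list)
    for account, item, direction in vote_log:
        by_item[(item, direction)].append(account)

    edges = defaultdict(int)
    for voters in by_item.values():
        for i, a in enumerate(voters):
            for b in voters[i + 1:]:
                key = tuple(sorted((a, b)))
                edges[key] += 1  # one more matching vote between a and b
    return dict(edges)
```

High-weight edges then point at the tight clusters (the "18 accounts" case), regardless of which accounts they are.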

The only way I could see to avoid it would be disposable bot accounts: you coordinate once and then just make a bunch of new accounts. That's fine, but it's also easily countered by banning the identity (email), and it's a total PITA to set up. If they're willing to go through that over and over again, they deserve to have the points.

I'd also run checks on keyword sentiment, coupled with bot-behavior checks. For example, when someone is trying to control the narrative by targeting keywords instead of specific accounts.

Oh yeah, that's a far more sophisticated system than I was thinking of. Bots are easy to detect; trying to detect a group of people creating a narrative is far more difficult because of the risk of false positives. What if a group of users are memeing? Or being ironic/mocking? They would be using the same words but not with the intent you expected. That's a really difficult task, and it's why they will never be able to create working hate-speech filters. It's just too hard when words change meaning or come into existence all the time. Imagine when something like "super straight" comes out. Even as a human you have a hard enough time parsing the meaning behind that if you haven't watched the video explaining it. Code would have no chance of classifying it.