That's amazing. I had considered setting up a bot cloud to troll the hell out of redditors and control the narrative a bit, but never got around to it. It's pleasing to know it's so easily done though.
It wouldn't be hard to fix imo. I think they just have zero countermeasures active.
I'd set up a system of heuristics that scores an account's likelihood of belonging to a bot network. For example, voting in concert with some number of other accounts within a given time period (say, 24 hours) would raise the score. The more closely an account mirrored another account's upvotes and downvotes, the higher the score; the only reason two accounts would match 1:1 is if one of them were a bot. The more accounts matching the same behavior, and the tighter the time interval between matching actions, the more I'd increase it. After hitting some threshold, instant ban.
It would be nearly impossible to circumvent. You could delay a ban by spacing your actions out over a longer period, but eventually the pattern would become obvious.
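Something like this, roughly. This is a made-up Python sketch of the scoring idea above, not anything Reddit actually runs; all names, the window, and the ban threshold are invented for illustration:

```python
# Sketch of the vote-coordination heuristic: score account pairs by how
# often they cast the same vote on the same item close together in time.
# Each vote is a (account, item, direction, timestamp) tuple.
from collections import defaultdict
from itertools import combinations

WINDOW = 24 * 3600      # seconds: votes this close together count as "in concert"
BAN_THRESHOLD = 0.9     # hypothetical suspicion score at which an account is flagged

def suspicion_scores(votes):
    """Return each account's suspicion score: its highest vote-overlap
    ratio with any other single account."""
    by_item = defaultdict(list)
    totals = defaultdict(int)            # account -> total votes cast
    for account, item, direction, ts in votes:
        by_item[item].append((account, direction, ts))
        totals[account] += 1

    matches = defaultdict(int)           # (a, b) -> matching votes within WINDOW
    for entries in by_item.values():
        for (a1, d1, t1), (a2, d2, t2) in combinations(entries, 2):
            if a1 != a2 and d1 == d2 and abs(t1 - t2) <= WINDOW:
                matches[tuple(sorted((a1, a2)))] += 1

    # A 1:1 match across an account's whole history pushes its score to 1.0.
    scores = defaultdict(float)
    for (a1, a2), n in matches.items():
        ratio = n / min(totals[a1], totals[a2])
        scores[a1] = max(scores[a1], ratio)
        scores[a2] = max(scores[a2], ratio)
    return dict(scores)

def flagged(votes):
    """Accounts whose suspicion score crosses the ban threshold."""
    return {a for a, s in suspicion_scores(votes).items() if s >= BAN_THRESHOLD}
```

Spacing actions out just shrinks how many votes land inside the window per day; the overlap ratio still climbs toward 1.0 over a long enough history.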
So the scoring would match against any overlapping accounts, not a fixed set of X accounts, right? Otherwise it could be bypassed by having the bot network pick accounts at random to cast each vote.
I'd also run checks on keyword sentiment, coupled with the bot-behavior checks: for example, catching cases where a narrative is being pushed by targeting keywords rather than specific accounts.
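A crude version of that keyword check could look like this. Again a hypothetical Python sketch with invented names: it just asks whether the comments an account downvotes mention the targeted keywords far more often than the site baseline does:

```python
def keyword_targeting_score(downvoted_comments, all_comments, keywords):
    """How over-represented are the keywords in what this account
    downvotes, relative to the baseline rate across all comments?
    A ratio well above 1.0 suggests keyword-targeted voting."""
    def hit_rate(comments):
        hits = sum(any(k in c.lower() for k in keywords) for c in comments)
        return hits / len(comments) if comments else 0.0

    baseline = hit_rate(all_comments)
    targeted = hit_rate(downvoted_comments)
    return targeted / baseline if baseline else 0.0
```

On its own this would flag plenty of genuine humans who dislike a topic, so it only makes sense combined with the coordination score, as described above.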
It's bots, long story short.
There is a user here that uses a bunch of different alts and a browser automation tool to mass downvote people that draw his ire.
It's happened to others before, including olds77 and maga_science.
I trolled the guy several days back to get him to do it to me so I could prove what was going on when people asked about it.
The botter hasn't let up, lol.
His bots have downvoted my comment history so hard that it reinstated the handshake icon next to my name earlier today.
It's hilarious.