mintscape 10 points ago +10 / -0

Anycast is just advertising the same route out of multiple locations. It's an old trick for faster DNS responses and for CDNs serving static content, and it was in use long before it got the name Anycast.

For a dynamic service like T_D that operates on a single database, anycast just can't work. You would need to lock and synchronize the databases for every write, from every location. It's impossible to maintain state without a huge, impractical overhead that would make things unusable.

This is why even places like Reddit run their platform with load balancers on the front end and a huge database cluster.

I used to work for Cisco; I have done a huge amount of networking and worked on IOS for them. I know what I am talking about, my pede.

I get the feeling you actually don't know how Anycast works, it's just advertising the same route from different locations, nothing more. I'll explain.

Say I have a site in San Fran and I have a server 8.8.8.8, I advertise to my BGP peers 8.8.8.0/24. So there exists a single route to that server in the global routing table. Now people in Australia are using my server but complain it is too slow, so I add a new server with the same content and same IP address in Australia and I also advertise 8.8.8.0/24 from my Australian site to my BGP peers there.

So what happens? The global routing table ends up with two paths to 8.8.8.8. What's the next hop? The route with the lowest cost. So routers closer to Australia will pick the path to the Australian 8.8.8.8 server, and those closer to San Fran will pick the path to the San Fran server. If one of those sites has a power outage, I also get automatic failover to the next closest server. I can keep adding locations and 8.8.8.8 servers all over the world if I like.
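The lowest-cost selection described above can be sketched in a few lines. This is a toy illustration, not real BGP: the site names and path costs are invented, and a real router compares many attributes beyond a single cost.

```python
# Toy model of anycast route selection: two sites advertise the same
# prefix, and each router simply picks the lowest-cost path.
# Site names and costs are hypothetical.

# Routes to 8.8.8.0/24 as seen from one router: site -> path cost
routes = {
    "san_francisco": 10,
    "sydney": 25,
}

def best_site(routes):
    """Pick the advertisement with the lowest path cost."""
    return min(routes, key=routes.get)

# A router near San Francisco (with the costs above) sends traffic west:
print(best_site(routes))  # -> san_francisco

# If the San Francisco site withdraws its route (power outage),
# traffic automatically fails over to the next closest site:
del routes["san_francisco"]
print(best_site(routes))  # -> sydney
```

The failover falls out for free: withdrawing one advertisement leaves the next-cheapest path as the new best route.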

That's it, that's Anycast. It should now be clear why it can only work with sites or data that are static or get updated very infrequently. It should be obvious that something like a dynamically built forum with hundreds of writes per minute isn't suitable for Anycast. It is very common for reverse proxies to be on anycast addresses; this is what has people confused, as a reverse proxy just sends traffic to the real server.
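The reverse-proxy point can be illustrated with a tiny sketch. Everything here is hypothetical: `fetch_from_origin` stands in for the real upstream HTTP call, and the cache shows why static responses are easy to serve from any anycast edge location.

```python
# Sketch of why a stateless reverse proxy can live on an anycast IP:
# any proxy instance can serve any request, because it only relays to
# the single real (stateful) origin server. `fetch_from_origin` is a
# hypothetical stand-in for the actual upstream HTTP request.

def fetch_from_origin(path):
    # Pretend origin: a real proxy would make an HTTP request to the
    # one stateful backend here.
    pages = {"/": "<html>forum front page</html>"}
    return pages.get(path, "404")

def proxy(path, cache={}):
    """Relay a request; static responses can be cached at the edge."""
    if path not in cache:
        cache[path] = fetch_from_origin(path)
    return cache[path]
```

Because the proxy holds no application state of its own, it doesn't matter which anycast location a client's packets land on.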

AussieTrumpSupporter [S] 3 points ago +5 / -2

For the database, for sure. I'm talking static content though. A DDoS often won't even get to the database layer; it will just be flooding SYN packets. My point was more that it would be hard for TD to build its own Cloudflare, especially when they already have all the other complexities they deal with.

deleted 5 points ago +5 / -0
HocusLocus 2 points ago +2 / -0

So (bless you for taking your time) how is this present level of Cloudflare working? It seems as if I -- a user who has logged on maybe a dozen times in TDW's history, so my session cookies are very long lived and I've never hit TDW with any script-like behavior -- when I go into total bicycle mode I am forced to conclude my session cookies are NOT being whitelisted at the front end.

Does whitelisting strings given to Cloudflare, such as the small gobbledygook in my (valid, real human) session ID, cost extra money, or is it a limited resource?

From how you describe it, Cloudflare is trying to identify sessions to blacklist. The DDoSers have thousands of zombie sessions that start to issue zombie clicks (slow for each, but lots en masse) and, from what I've learned, resource-intensive ops like searches, all at the same time?

Mods are cagey to discuss it but as an old IT Boomer who invented the Internet (it wuz me I swear) I'm curious about how it's being done with a thought to better (especially cheap) countermeasures.

deleted 3 points ago +3 / -0
AussieTrumpSupporter [S] 2 points ago +2 / -0

That makes sense. So DDoS attacks have gotten more complex over time. In that case the key benefit of Cloudflare is that they can probably identify rogue IP addresses much faster due to their size and serve up a captcha to reduce the problematic traffic. It certainly makes DDoS attacks harder by rate limiting abnormal refresh rates and other activity. T_D is holding up well so far.
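The per-IP rate limiting mentioned above is often implemented as a token bucket. A minimal sketch with invented thresholds; real CDN heuristics are far more elaborate than a single bucket per address:

```python
# Hypothetical per-IP token-bucket rate limiter: each client address
# gets a bucket that refills at a steady rate; requests over the limit
# would be answered with a captcha instead of the real page.
import time

class TokenBucket:
    def __init__(self, rate=5.0, burst=10.0):
        self.rate = rate            # tokens refilled per second
        self.burst = burst          # maximum bucket size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                # over the limit: serve a captcha

buckets = {}

def check(ip):
    """Allow or throttle one request from `ip`."""
    return buckets.setdefault(ip, TokenBucket()).allow()
```

A client refreshing at an abnormal rate drains its bucket and starts getting challenged, while normal browsing never notices the limit.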

mintscape 2 points ago +2 / -0

Exactly. The nasty thing that has happened with DDoS attacks is that they are mostly done by criminals for hire these days. They infect a huge number of PCs with malware, and those act as zombie machines the DDoS gangs control; they either extort money from the victims or charge people by the hour for attacks.

I would say it is close to 100% that the DDoS attacks we have had are some leftist getting on a Tor site and buying a DDoS. Cloudflare is such a good service; our mods knew what they were doing ensuring that protection right from the start.

twopoint71 2 points ago +2 / -0

Could always run multiple load balancers and use DNS load balancing, which confounds DDoS attacks pretty well. Anycast can get weird as the global topology shifts around, e.g. a client in a threshold area may start a connection to one location and have to finish it in another. An edge case to be sure, but not impossible. Anycast would require genuine statelessness to work effectively, which is doable with some good planning around session AAA.
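The DNS load balancing mentioned above can be sketched as simple round-robin: the authoritative server rotates the order of the A records it hands out, so clients spread across the pool of balancers. The class name and addresses below are made up for illustration.

```python
# Toy round-robin DNS: each query returns the full pool of A records,
# rotated one step, so successive clients try a different balancer
# first. Addresses are placeholders.
from collections import deque

class RoundRobinDNS:
    def __init__(self, addresses):
        self._pool = deque(addresses)

    def resolve(self):
        """Return all A records; rotate so the next query differs."""
        answer = list(self._pool)
        self._pool.rotate(-1)   # next client sees a different first record
        return answer

dns = RoundRobinDNS(["192.0.2.1", "192.0.2.2", "192.0.2.3"])
print(dns.resolve())  # -> ['192.0.2.1', '192.0.2.2', '192.0.2.3']
print(dns.resolve())  # -> ['192.0.2.2', '192.0.2.3', '192.0.2.1']
```

This also helps against a DDoS: the flood gets diluted across balancers instead of concentrating on one front end, and a dead balancer can simply be dropped from the pool.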

deleted 2 points ago +2 / -0
mintscape 1 point ago +2 / -1

That's a good idea. I can tell you one thing about T_D: our mods know what they are doing; they are excellent. This site is so well run it makes me smile, as so many corporations don't come close to running things as well as our mods.

I'm a developer. I got burned out on the tech industry years ago, but I am using my skills to help us and to start fighting back. It's a lot of fun, but I have to be so careful.

deleted 2 points ago +2 / -0
randomusers239874 -2 points ago +1 / -3

I used to work for a pretty big social media company. They all use anycast, not reverse proxies, with a multi-region setup. You don't need strong consistency on the data side of things, so you're incorrect. The typical setup is a PoP ring sharing a common anycast IP, with private interconnects to the other locations. Updates are made to a local (or preferred) colo, then synced to the others. Because people tend not to move around too much, you can rely on location to choose a preferred colo, and because this is not a live application, you don't need strong consistency between the colos. Each user seeing a slightly different thing doesn't matter, since content is intrinsically ordered.
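The "write locally, sync later" model described above can be sketched as a last-write-wins merge on timestamps. This is a hedged illustration of eventual consistency between colos, not any particular company's real replication protocol; the names and data are invented.

```python
# Toy eventual-consistency model: each colo accepts writes into its own
# store, then replicas merge with a last-write-wins rule per key.

class Colo:
    def __init__(self, name):
        self.name = name
        self.store = {}   # key -> (timestamp, value)

    def write(self, key, value, ts):
        self.store[key] = (ts, value)

    def sync_from(self, other):
        """Merge another colo's state; the newer timestamp wins per key."""
        for key, (ts, value) in other.store.items():
            if key not in self.store or ts > self.store[key][0]:
                self.store[key] = (ts, value)

us = Colo("us-west")
au = Colo("ap-sydney")
us.write("post:1", "first!", ts=1)
au.write("post:1", "edited", ts=2)
us.sync_from(au)           # eventually consistent: the newer edit wins
print(us.store["post:1"])  # -> (2, 'edited')
```

Until the sync runs, each colo serves its local view, which is exactly the "each user sees a slightly different thing" behavior described above.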