Anycast is just advertising the same route out of multiple locations. It's an old trick for faster DNS responses and for CDNs serving static content, and it was in use long before it got the name Anycast.
For a dynamic service like T_D that operates on a single database, anycast alone cannot work. You would need to lock and synchronize the databases for every write from every location; it's impossible to maintain state without a huge, impractical overhead that would make the service unusable.
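To see why, here's a back-of-the-envelope sketch. If every write must hold a lock and be synchronously replicated to every site before it commits, the commit time is bounded by the round-trip to the slowest site. The latency numbers below are illustrative assumptions, not measurements:

```python
# Illustrative inter-site round-trip times in milliseconds, as seen
# from the site that accepted the write. Numbers are assumptions.
rtt_ms = {"San Francisco": 0, "Sydney": 150, "Frankfurt": 160}

# A synchronous write cannot commit until the slowest replica acks,
# so the lock is held for at least the worst round-trip time.
commit_ms = max(rtt_ms.values())

print(commit_ms)  # 160 ms held per write
print(1000 / commit_ms)  # ~6 serialized writes/sec on any contended row
```

A forum doing hundreds of writes per minute on hot rows (vote counters, comment threads) would spend most of its time waiting on inter-continental locks.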
This is why even places like Reddit run their platform with load balancers on the front end and one huge database cluster behind them.
I used to work for Cisco; I have done a huge amount of networking and worked on IOS for them, so I know what I am talking about, my pede.
I get the feeling you don't actually know how Anycast works: it's just advertising the same route from different locations, nothing more. I'll explain.
Say I have a site in San Francisco with a server at 8.8.8.8, and I advertise 8.8.8.0/24 to my BGP peers. There now exists a single route to that server in the global routing table. Now people in Australia are using my server but complain it is too slow, so I add a new server with the same content and the same IP address in Australia, and I also advertise 8.8.8.0/24 from my Australian site to my BGP peers there.
So what happens? The global routing table ends up with two paths to 8.8.8.8. Which one becomes the next hop? The route with the lowest cost. Routers closer to Australia will pick the path to the Australian 8.8.8.8 server, and those closest to San Francisco will pick the path to the San Francisco server. If one of those sites has a power outage, I also get automatic failover to the next closest server. I can keep adding locations and 8.8.8.8 servers all over the world if I like.
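The selection and failover behavior above can be sketched in a few lines. This is a toy model, not BGP itself (real best-path selection weighs AS-path length, local preference, and more), and the site names and costs are made up:

```python
# Toy model of anycast route selection: two sites advertise the same
# prefix, and each router independently picks the lowest-cost path
# from its own vantage point.

routes = {
    "8.8.8.0/24": [
        {"site": "San Francisco", "cost": 3},  # cost as seen by this router
        {"site": "Sydney", "cost": 9},
    ],
}

def best_site(prefix, table, down=frozenset()):
    """Pick the lowest-cost advertisement, skipping withdrawn sites."""
    candidates = [r for r in table[prefix] if r["site"] not in down]
    return min(candidates, key=lambda r: r["cost"])["site"]

# A router near San Francisco picks the nearby site...
print(best_site("8.8.8.0/24", routes))  # San Francisco
# ...and if that site loses power and withdraws its advertisement,
# traffic automatically follows the next closest one.
print(best_site("8.8.8.0/24", routes, down=frozenset({"San Francisco"})))  # Sydney
```

The key property is that no coordination happens between the sites: each router's ordinary shortest-path choice does all the work.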
That's it; that's Anycast. It should now be clear why it can only work with sites or data that are static or updated very infrequently, and why something like a dynamically built forum with hundreds of writes per minute isn't suitable for it. It is very common for reverse proxies to sit on anycast addresses, which is what has people confused: a reverse proxy just forwards traffic on to the real server.