But like, why not move that outside of us-east-1? That's literally always the one that goes down. Just move that single point of failure out of what is by far the most heavily used AWS region. Seems like a basic reliability engineering practice.
Each site has its own network load balancer; if it were an AWS-wide one, it probably wouldn't be a single region that goes down. And again, this is just my guess. A hardware load balancer is how the place I worked at managed traffic between several server rooms, and it was usually the main culprit for downtime. It's a very handy device for keeping your network from getting overwhelmed by traffic spikes, but afaik it's really hard to make redundant.
Oh, I see what you mean. Yeah, I guess it makes sense that us-east-1 goes down more often since it's the most heavily trafficked, and it does depend on physical hardware that can fail.
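To illustrate the redundancy point above: this is a minimal, hypothetical sketch of what a load balancer does in software, with backend addresses and health-check paths made up for the example. It forwards requests to whichever backend is healthy, which is easy; notice that the balancer process itself is still a single point of failure, and making *that* redundant (floating IPs via VRRP/keepalived, DNS failover, etc.) is the hard part being described.

```go
// Hypothetical sketch of a load balancer: round-robin over healthy backends.
// Backend URLs and the /healthz path are placeholders, not anyone's real setup.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
	"time"
)

type backend struct {
	url     *url.URL
	proxy   *httputil.ReverseProxy
	healthy atomic.Bool
}

func newBackend(raw string) *backend {
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	b := &backend{url: u, proxy: httputil.NewSingleHostReverseProxy(u)}
	b.healthy.Store(true)
	return b
}

func main() {
	// Stand-ins for "several server rooms".
	backends := []*backend{
		newBackend("http://10.0.1.10:8080"),
		newBackend("http://10.0.2.10:8080"),
	}

	// Periodic health checks: mark a backend unhealthy if /healthz fails.
	go func() {
		client := &http.Client{Timeout: 2 * time.Second}
		for {
			for _, b := range backends {
				resp, err := client.Get(b.url.String() + "/healthz")
				ok := err == nil && resp.StatusCode == http.StatusOK
				if resp != nil {
					resp.Body.Close()
				}
				b.healthy.Store(ok)
			}
			time.Sleep(5 * time.Second)
		}
	}()

	// Route each request to the next healthy backend, skipping unhealthy ones.
	var next uint64
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		n := len(backends)
		start := int(atomic.AddUint64(&next, 1))
		for i := 0; i < n; i++ {
			b := backends[(start+i)%n]
			if b.healthy.Load() {
				b.proxy.ServeHTTP(w, r)
				return
			}
		}
		http.Error(w, "no healthy backends", http.StatusServiceUnavailable)
	})

	// This single listener is the single point of failure the thread is about.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```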