Among the many announcements at this year’s AWS re:Invent conference was a new service: Global Accelerator.

It’s a “networking service that improves the availability and performance of the applications that you offer to your global users.” It does so by allowing you to make use of anycast routing and the AWS global network to build resilient and performant networks.

An Example

As an example, let’s say I’m offering a web application to customers located throughout the world. A workload like this typically evolves as it becomes more popular and critical to the success of a business. I might start by deploying infrastructure for the application into one AWS region. This infrastructure then serves all customers regardless of location, even if they’re on the opposite side of the planet. They connect to these services over the public Internet, bouncing from network to network until they reach the intended destination.

Over time, I might choose to deploy resources into multiple regions and use DNS services such as Route 53 to direct people to the region that has the least amount of latency from their location in order to improve the customer experience. For example, I might make use of the Northern Virginia region in the U.S. and the Frankfurt region in Europe. This can improve the experience, but what about those in Asia? Or South America? It becomes prohibitively expensive and complex to continue deploying into more and more regions, trying to place infrastructure close to as many customers as possible.
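To make the latency-based DNS approach concrete, here is a minimal sketch of what those records might look like, shaped as the `ChangeBatch` payload that Route 53's `ChangeResourceRecordSets` API expects. The domain and ALB DNS names are hypothetical placeholders.

```python
# Sketch: latency-based routing records for two regions, shaped as the
# ChangeBatch payload Route 53's ChangeResourceRecordSets API expects.
# The domain and load balancer DNS names are hypothetical placeholders.

def latency_record(domain, region, target_dns):
    """One latency-based CNAME record scoped to a region."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": domain,
            "Type": "CNAME",
            "SetIdentifier": f"app-{region}",  # must be unique per record
            "Region": region,                  # enables latency-based routing
            "TTL": 60,
            "ResourceRecords": [{"Value": target_dns}],
        },
    }

change_batch = {
    "Comment": "Latency-based routing: Northern Virginia and Frankfurt",
    "Changes": [
        latency_record("app.example.com.", "us-east-1",
                       "my-alb-use1.us-east-1.elb.amazonaws.com"),
        latency_record("app.example.com.", "eu-central-1",
                       "my-alb-euc1.eu-central-1.elb.amazonaws.com"),
    ],
}

# With boto3, this payload would be submitted as:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="ZONEID", ChangeBatch=change_batch)
print(len(change_batch["Changes"]))
```

Route 53 answers each query with the record whose region has the lowest measured latency from the resolver, which is what directs Europeans to Frankfurt and North Americans to Northern Virginia.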

A CDN, such as CloudFront, then becomes useful, allowing you to make use of 100+ “edge” locations distributed globally across multiple continents, including Africa. These locations become an extension of your regional infrastructure, and content can also be cached there, reducing the need to route customers all the way back to the regional origin(s) where the application has been deployed. These locations are frequently in metropolitan areas rather than in the safer zones where AWS regional data centers are built, so they’re closer to end users. An added benefit is that traffic between the edge locations and AWS regions traverses the private AWS network, which is highly available, congestion-free, and more performant than the public Internet. For some applications, CDNs are so helpful that companies don’t need to leverage multiple regions.

While CDNs, multi-region deployments, and latency-based DNS are helpful in allowing you to expand your global network presence, Global Accelerator allows you to take advantage of the underlying technologies that some of these services use, such as anycast routing and health checks, while reducing the effort you would typically have to make.

How It Works

With Global Accelerator, the customer experience looks like this:

  1. A customer in Peru, for example, goes to their browser and enters your domain name.
  2. As usual, DNS is used to resolve this domain to an IP address. Whether you’re using Route 53 or something else, you configure DNS to return one of two static IPs provided by Global Accelerator.
  3. These IP addresses are announced from multiple AWS edge locations simultaneously throughout the world and the customer gets connected to the closest location possible, such as Rio de Janeiro, Brazil.
  4. From the edge location, the customer is routed over Amazon’s private network back to the closest AWS region where your application resides, such as Northern Virginia. Global Accelerator also directs traffic away from endpoints within those regions that are unavailable or experiencing downtime. It doesn’t matter whether you’re using one region or many.

In this way, you’re using routing rather than DNS to direct users to the right regions and endpoints while leveraging the AWS network without having to use CloudFront. You get instant regional failover capabilities, high availability, and better performance, without having to deal with DNS TTL delays.
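The DNS piece of this flow is deliberately simple: your domain resolves to the two static Global Accelerator addresses, and anycast routing does the rest. A minimal sketch, with hypothetical placeholder IPs:

```python
# Sketch: the DNS side of the flow above. Your domain points at the two
# static IPs Global Accelerator allocates; anycast routing then connects
# each client to the nearest edge location announcing those addresses.
# The IPs below are hypothetical placeholders, not real GA addresses.

STATIC_IPS = ["198.51.100.10", "198.51.100.74"]

a_record = {
    "Name": "app.example.com.",
    "Type": "A",
    "TTL": 300,  # TTL matters far less here, since these IPs never change
    "ResourceRecords": [{"Value": ip} for ip in STATIC_IPS],
}

print(a_record["Type"], len(a_record["ResourceRecords"]))
```

Because failover happens at the routing layer rather than in DNS, a stale cached answer still points at a working address.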

Using Global Accelerator

To use the service, you create an “accelerator,” attach “listeners” to it for the ports and protocols you want to accept traffic on, and associate those listeners with “endpoint groups,” each tied to a specific region. Each endpoint group contains a list of “endpoints” that traffic is load balanced across; these can be elastic IPs, application load balancers, and network load balancers. Dials and weights let you fine-tune how traffic is routed to regions and endpoints, which is helpful for A/B testing, rolling out new deployments, and disaster recovery.

The setup process is very straightforward, and the console is simple and easy to use.

How It Compares to Other Services

While CloudFront can achieve some of the same objectives, it is limited to HTTP traffic. Global Accelerator can be used for all TCP and UDP traffic. CloudFront can cache HTTP objects at edge locations and it uses a DNS name with changing IP addresses rather than static ones. Note, however, that the client IP address is not made available to backend applications using Global Accelerator, but it is available when using CloudFront (per the X-Forwarded-For header). If needed, flow logs can be turned on for Global Accelerator to log IP addresses and inspect them later.
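Enabling flow logs is a one-call change. A minimal sketch, shaped as the payload for `UpdateAcceleratorAttributes` (the ARN and bucket name are placeholders; the bucket must exist and allow Global Accelerator to write to it):

```python
# Sketch: turning on flow logs for an accelerator so client IPs can be
# inspected later. Shaped as the UpdateAcceleratorAttributes payload;
# the ARN and bucket name are hypothetical placeholders.

flow_log_params = {
    "AcceleratorArn": "arn:aws:globalaccelerator::123456789012:accelerator/EXAMPLE",
    "FlowLogsEnabled": True,
    "FlowLogsS3Bucket": "my-flow-logs-bucket",
    "FlowLogsS3Prefix": "global-accelerator/",
}

# In real code:
#   boto3.client("globalaccelerator").update_accelerator_attributes(**flow_log_params)
print(flow_log_params["FlowLogsEnabled"])
```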

DNS approaches, such as Route 53 latency-based routing, suffer from cached DNS responses when certain devices, browsers, and networks ignore TTLs. In some tests I ran, I saw a 10-15 second failover time between endpoints in different regions when a server became unavailable. That’s really hard to achieve with DNS.

Other Considerations

  • Currently, Global Accelerator only supports IPv4.
  • Every “accelerator” gets AWS Shield Standard for DDoS protection.
  • The two static IPs allocated for an accelerator are serviced by independent network zones, similar to AZs.
  • While I wouldn’t necessarily recommend it, this new service could now be used to associate a static IP address with an application load balancer, something that has traditionally required some custom work.
  • Listeners can make use of a “client affinity” setting to add stickiness so that network clients are routed to the same endpoint after first connecting. This can be helpful for stateful applications.
  • For elastic IPs, you can choose what the health check is. For ALBs and NLBs, it’ll use the load balancer’s health check.
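For the elastic IP case in the last bullet, the health check lives on the endpoint group. A hypothetical sketch, shaped as `CreateEndpointGroup` parameters (the ARN, allocation ID, and check values are illustrative):

```python
# Sketch: an endpoint group fronting elastic IPs, where you define the health
# check yourself (ALB/NLB endpoints use the load balancer's own checks).
# The ARN, allocation ID, and health check values are hypothetical.

eip_endpoint_group = {
    "ListenerArn": "arn:aws:globalaccelerator::123456789012:listener/EXAMPLE",
    "EndpointGroupRegion": "eu-central-1",
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPort": 80,
    "HealthCheckPath": "/healthz",
    "HealthCheckIntervalSeconds": 10,
    "ThresholdCount": 3,  # consecutive failures before the endpoint is unhealthy
    "EndpointConfigurations": [
        {"EndpointId": "eipalloc-0123456789abcdef0", "Weight": 255},
    ],
}

print(eip_endpoint_group["HealthCheckProtocol"])
```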


As for pricing, you pay $0.025/hour for each “accelerator.” You also pay for the amount of traffic that flows through it, and only the dominant direction of your traffic is billed. For example, if 60% of your traffic is outbound, you’re charged only for the outbound portion. This charge is in addition to standard data transfer rates, and the amount depends on which regions and edge locations are used.
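The dominant-direction rule is easy to mis-estimate, so here is a rough sketch of the Global Accelerator portion of a monthly bill. The per-GB premium rate is a hypothetical figure (it varies by source and destination), and standard data transfer charges are not included.

```python
# Rough sketch of the monthly Global Accelerator charge under the pricing
# model described above. PREMIUM_RATE is a hypothetical per-GB figure; the
# real rate varies by region/edge pair, and standard data transfer is extra.

HOURS_PER_MONTH = 730
FIXED_RATE = 0.025    # $/hour per accelerator
PREMIUM_RATE = 0.015  # $/GB, hypothetical premium rate

def monthly_cost(outbound_gb, inbound_gb):
    """Fixed hourly fee plus premium on the dominant traffic direction only."""
    dominant_gb = max(outbound_gb, inbound_gb)
    return HOURS_PER_MONTH * FIXED_RATE + dominant_gb * PREMIUM_RATE

# 600 GB out vs. 400 GB in: only the larger, outbound side is billed.
print(round(monthly_cost(600, 400), 2))
```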


Overall, there’s a lot of overlap between Global Accelerator, Route 53, and CloudFront, but there are tradeoffs you should be aware of when architecting applications that are resilient, performant, and globally available. It’s great to see these new capabilities being offered, and they enable a lot of new possibilities.