In April, Amazon announced a new service for DynamoDB called DynamoDB Accelerator (DAX). While you may not need a new science officer aboard your crew, you may be in the market for a fully managed, in-memory cache for DynamoDB queries.
AWS builds new services based on customer demand, so I'm sure plenty of organizations are itching to use this one. Here are a few thoughts on DAX for those who aren't quite sure how best to implement it.

1. Double check your VPC

Unlike the Elasticsearch or QuickSight services, DAX requires a properly configured VPC. DAX can only be used within a VPC subnet and is implemented as a cluster of 1-10 EC2 instances that the service spins up. You must provide a subnet for that cluster to live in, the instance size of each node, the number of nodes to provision, and a DynamoDB table to serve as the data origin. You'll receive an endpoint for the cluster; point your application at it and you can begin caching reads. Make sure you have enough IP addresses available for the DAX service! As with the ELB service, AWS will consume available IP addresses within the VPC you've configured.
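If you'd rather script that setup than click through the console, the boto3 dax client exposes the same knobs. This is only a sketch: the subnet IDs, cluster name, node type, and IAM role ARN below are placeholders, and the IAM role is the one DAX assumes to reach the origin table on your behalf.

```python
# Sketch of provisioning a DAX cluster with boto3; all names and IDs are placeholders.
import boto3

dax = boto3.client('dax', region_name='us-east-1')

# The subnet group tells DAX which VPC subnets its nodes may occupy;
# each node consumes an IP address from these subnets.
dax.create_subnet_group(
    SubnetGroupName='dax-subnets',
    Description='Subnets with enough free IPs for the DAX nodes',
    SubnetIds=['subnet-aaaa1111', 'subnet-bbbb2222'],
)

# ReplicationFactor is the node count (1-10); NodeType is the instance size.
dax.create_cluster(
    ClusterName='my-dax-cluster',
    NodeType='dax.r3.large',
    ReplicationFactor=3,
    IamRoleArn='arn:aws:iam::123456789012:role/DAXServiceRole',
    SubnetGroupName='dax-subnets',
)

# Once the cluster is available, this endpoint is what your application points at.
endpoint = dax.describe_clusters(
    ClusterNames=['my-dax-cluster']
)['Clusters'][0]['ClusterDiscoveryEndpoint']
print(endpoint)
```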

2. Eliminate strongly consistent reads

DAX does support strongly consistent reads, but if your application requires them, DAX will not provide any benefit. DAX instances always reach back to the DynamoDB origin table to retrieve data for strongly consistent reads, so no caching occurs.
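Concretely, the flag in question is the ConsistentRead parameter on read requests. The snippet below uses the plain boto3 DynamoDB client with a made-up table and key; the DAX client mirrors this interface, and any request carrying ConsistentRead=True is passed straight through to the origin table instead of being cached.

```python
# Illustration of eventually consistent vs. strongly consistent reads; the
# table name and key are placeholders.
import boto3

ddb = boto3.client('dynamodb', region_name='us-east-1')

# Eventually consistent read (the default): eligible for DAX caching.
ddb.get_item(
    TableName='Leaderboard',
    Key={'RacerId': {'S': 'racer-42'}},
)

# Strongly consistent read: DAX forwards this to the origin table every time.
ddb.get_item(
    TableName='Leaderboard',
    Key={'RacerId': {'S': 'racer-42'}},
    ConsistentRead=True,
)
```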

3. One change, many viewers

DAX is a write-through caching service. This is important to keep in mind as you're determining how valuable it will be to your application. Write-through caching is best suited to applications that require many reads immediately after a single write. Think of a stock trading floor or the leaderboard of a car race: a change occurs once (the price of a stock, the position of a racer), but hundreds of people need to see that change at the same time. The DAX service provides value by shortening the time it takes to conduct those lookups.
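To make the pattern concrete, here is a rough sketch of one write followed by many reads through the cluster endpoint. It assumes the amazon-dax-client Python package (module name amazondax); the constructor arguments may differ by client version, and the endpoint, table, and item are placeholders.

```python
# Sketch of the one-writer / many-readers pattern behind write-through caching.
# Assumes the amazon-dax-client package; endpoint and table are placeholders.
import botocore.session
from amazondax import AmazonDaxClient

session = botocore.session.get_session()
dax = AmazonDaxClient(
    session,
    region_name='us-east-1',
    endpoints=['my-dax-cluster.abc123.clustercfg.dax.use1.cache.amazonaws.com:8111'],
)

# One write: DAX writes the item to the origin table and to its item cache.
dax.put_item(
    TableName='Leaderboard',
    Item={'RacerId': {'S': 'racer-42'}, 'Position': {'N': '3'}},
)

# Many reads: subsequent lookups for the same item are served from the cache.
for _ in range(100):
    dax.get_item(
        TableName='Leaderboard',
        Key={'RacerId': {'S': 'racer-42'}},
    )
```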

4. The DAX tax

Uncached queries must travel through an additional node before they return, which can add a small amount of latency to that initial result. If you're testing query improvement after implementing DAX, be sure to run your queries multiple times in a single test so that cache hits, not just the first cache miss, are represented.
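One simple way to do that is to time several consecutive calls and compare the first (cache-miss) result with the rest. The helper below is hypothetical; it works with either a plain DynamoDB client or a DAX client, since the two expose the same get_item call, and the table name and key are placeholders.

```python
# Hypothetical timing helper: run the same lookup n times so the first
# (cache-miss) call isn't your only data point.
import time

def time_get_item(client, n=5):
    timings_ms = []
    for _ in range(n):
        start = time.perf_counter()
        client.get_item(
            TableName='Leaderboard',
            Key={'RacerId': {'S': 'racer-42'}},
        )
        timings_ms.append((time.perf_counter() - start) * 1000)
    return timings_ms  # expect the first call to be the slowest with DAX

# Example: timings = time_get_item(dax_client); compare timings[0] with the rest.
```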

5. Save on throughput

An additional benefit of implementing DAX is that it can cut your read-throughput costs for DynamoDB. This may not be the main reason you're looking at DAX, but don't forget that if you're caching reads, you can probably reduce the provisioned read throughput on the origin table, which is a large portion of the price of DynamoDB.
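Once DAX is absorbing most of your reads, dialing down the provisioned read capacity is a one-call change. The table name and numbers below are placeholders, not a recommendation; note that update_table expects the write capacity to be supplied alongside the read capacity even when only the read side changes.

```python
# Sketch of reducing provisioned read capacity after DAX starts serving reads;
# table name and capacity values are placeholders.
import boto3

ddb = boto3.client('dynamodb', region_name='us-east-1')

ddb.update_table(
    TableName='Leaderboard',
    ProvisionedThroughput={
        'ReadCapacityUnits': 50,    # down from a pre-DAX value such as 500
        'WriteCapacityUnits': 100,  # unchanged; writes still hit the origin table
    },
)
```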

6. Joined to a symbiont

Lastly, DAX must become the gatekeeper of your origin tables. If other applications bypass DAX to update DynamoDB, the applications that read through DAX will be served stale data. There are a few reasonable use cases for bypassing the caching service (effectively forcing a write-around caching strategy), but it's usually unintentional and should be avoided.
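As a sketch of the pitfall, the snippet below shows a writer updating the origin table directly while other applications read through DAX; those readers keep seeing the old cached item until its TTL expires. The clients, table, and key are the same placeholders used in the earlier sketches.

```python
# Anti-pattern sketch: writing around DAX. The table, key, and clients are
# placeholders; assume 'dax' is the AmazonDaxClient from the earlier sketch.
import boto3

ddb = boto3.client('dynamodb', region_name='us-east-1')

# This write updates the origin table but never touches the DAX item cache...
ddb.put_item(
    TableName='Leaderboard',
    Item={'RacerId': {'S': 'racer-42'}, 'Position': {'N': '1'}},
)

# ...so applications reading through DAX continue to see the previously cached
# item until its TTL expires. Unless you deliberately want a write-around
# strategy, send the write through the DAX endpoint instead:
# dax.put_item(TableName='Leaderboard',
#              Item={'RacerId': {'S': 'racer-42'}, 'Position': {'N': '1'}})
```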
If you’re interested in using the DAX service, read more about it here; if you’re eager to experiment with it, sign up for preview access here.
