August 13th, 2020
Saving Money on EC2 with Spot Instances
By Jordan Cannon

When it comes to Amazon Elastic Compute Cloud (EC2), there are a few different pricing options for paying for compute. If you’re currently using EC2 and not using Spot Instances, you could be paying significantly more than you need to for an identical setup.

When I first started launching EC2 instances back in the day, I would build everything using On-Demand pricing. This worked great for learning and testing while I didn’t yet know what I wanted to achieve, but eventually I started building more concrete, long-lasting projects and found myself needing to run them for significant stretches of time. For me, Reserved Instance pricing was the next logical step: I could reserve an EC2 instance for a year and receive a big discount in exchange for the commitment. It was an easy option to understand: pick your dates, then purchase. However, there was yet another pricing model for EC2, called Spot Instances, that left me unsure. It always seemed too difficult and slightly daunting to deal with. Concepts like bidding and running on what AWS called spare capacity kept me away. After finally taking the plunge, I can say that Spot Instances are simpler than ever before, thanks to recent changes from AWS, and they work well for a lot of use cases.

In this article we will focus on using Spot Instances for just one of those many use cases: web servers!

What is Spot?

According to AWS, “Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.”

When other customers are not using EC2 capacity, AWS offers the excess to you at a discount. When that capacity is needed again for those customers, AWS reclaims it by terminating or hibernating your instance.

Spot Instances are essentially the same thing as On-Demand or Reserved Instances; Spot is just another way to pay for the same compute. All three pricing models use the same underlying EC2 architecture, and all instance types offered through On-Demand and Reserved pricing are available in Spot: c5, a1, t3a, etc.

Why use Spot?

Why should you use Spot Instances? Because of the savings! With the Spot pricing model, you can save up to 90% on EC2 costs compared to On-Demand. Spot Instances also allow for flexibility: you can launch, terminate, or hibernate your instances at any time, and you only pay for what you use.

Also, Spot does not require any major architectural overhaul to use. This is great news if you are already using EC2 instances to run your applications. You’re still given the option to use other EC2 resources/features in conjunction, like ELB and Auto Scaling.

And now that AWS no longer requires bidding for Spot pricing, along with all the extra steps and processes that came with it, getting started is much easier.

It’s a cheaper, easy-to-set-up solution that is simple to architect for current or new EC2 workloads.

Architecting for Spot

If you decide to use Spot it’s important to understand and to plan for a few key concepts. These concepts follow best practices, so they are great to implement regardless. Let’s review them.

Plan to Make Your Server Interruption Tolerant

Spot pricing is dictated by supply and demand. If capacity is needed elsewhere, your Spot Instance will be reclaimed, with a two-minute warning, to fulfill that demand. Along with that, Spot Instances do not last indefinitely. We need to build our architecture for unforeseen interruptions by following these practices and making our servers:

  • Stateless – Save no permanent data to the instance; treat it as ephemeral. Because of interruptions, an instance may be taken down at any time.
  • Decoupled – Because instances are ephemeral, nothing else should depend on or run against a specific instance. Dependencies need to be minimized.
  • Multi-AZ – The more Availability Zones your instances are spread across, the more capacity pools you can pull from. Along with greater capacity options, you get higher availability for your users. A minimum of two Availability Zones is recommended.
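Interruption tolerance can be partly automated: before reclaiming a Spot Instance, AWS posts a two-minute notice (an `instance-action` document) to the instance metadata service. Below is a minimal Python sketch that parses that document; the metadata URL is the standard endpoint, but the drain logic described in the comments is an illustrative assumption, not a complete implementation.

```python
import json
from datetime import datetime, timezone

# On the instance itself, the interruption notice is served at this
# metadata path (it returns HTTP 404 until a notice is pending):
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def parse_interruption_notice(body: str):
    """Parse an instance-action document, e.g.
    {"action": "terminate", "time": "2020-08-13T16:20:42Z"},
    returning (action, time) so a drain hook can deregister the
    instance and finish in-flight requests before the two-minute
    window closes."""
    doc = json.loads(body)
    when = datetime.strptime(
        doc["time"], "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    return doc["action"], when
```

In a real deployment you would poll `METADATA_URL` every few seconds (with curl or urllib) and, on a 200 response, trigger whatever cleanup your stateless design needs.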

Make Your Instance Type Flexible

Because of Spot interruptions, you will need to diversify the instance types you use so you can pull from many different capacity pools.

Capacity pools are separated by Region, Availability Zone, instance family, and instance size. If there is no spare capacity for an m3.medium in AZ1, there could be spare capacity for an m3.medium in AZ2, or an m3.large in AZ1 could have capacity available as well.

You’ll need to consider running your workload on additional instance types. The more flexible you are here, the easier it will be to find the lowest prices, with more options to pull from for your capacity needs.
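In the Auto Scaling API, this flexibility is expressed as a list of instance-type overrides inside a mixed instances policy. The sketch below builds that structure as plain dicts; the key names follow the shape the `CreateAutoScalingGroup` call expects, but the specific instance types are just example choices of mine.

```python
# Each extra instance type is one more capacity pool per Availability
# Zone the Auto Scaling group can draw Spot capacity from.
FLEXIBLE_TYPES = ["t3a.small", "t3.small", "t2.small", "m5.large"]

def build_overrides(instance_types):
    """Build the Overrides list for a MixedInstancesPolicy:
    more entries -> more capacity pools -> fewer unfillable requests."""
    return [{"InstanceType": t} for t in instance_types]

mixed_instances_policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "spot-demo-launch-template"
        },
        "Overrides": build_overrides(FLEXIBLE_TYPES),
    },
    # These numbers mirror the console settings used later in this demo.
    "InstancesDistribution": {
        "OnDemandBaseCapacity": 1,
        "OnDemandPercentageAboveBaseCapacity": 10,
        "SpotAllocationStrategy": "capacity-optimized",
    },
}
```

The console walkthrough below configures exactly these values through the UI, so nothing here needs to be scripted for the demo; it just shows what the settings translate to.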

Let’s build something

Here is an overview of the architecture we’ll be building for our web server. It uses an application load balancer with auto scaling. We will be using two availability zones with an array of EC2 instance types under Spot to allow for flexibility.

Step 1: Create an ELB – Application

Overview

The Application Load Balancer will distribute traffic and requests to our EC2 instances.

Directions

  • Go to EC2 > Load Balancers on your dashboard and create an Application Load Balancer.
  • Name this spot-demo-load-balancer.
  • For this demo, leave the listener set to HTTP only; we won’t set up HTTPS here. For a production environment you will definitely need to set up HTTPS.
  • Under Availability Zones, select your VPC.
  • Select two public subnets, one per Availability Zone, by ticking the box next to each.
  • Leave everything else on this page default.
  • Click “Next” to go to Configure Security Groups.
  • Create a new security group and name it spot-demo-security-group.
  • Allow HTTP traffic (port 80) from anywhere (0.0.0.0/0).
  • Click “Next” and go to Configure Routing.
  • Create a new target group.
  • Name it spot-demo-target-group.
  • Leave everything else default on the page.
  • Click “Next” until you reach the Review page.

Your setup should match what I have here. Review your settings for the Application Load Balancer to verify.

  • Click “Create” to begin creating the ELB.
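If you prefer scripting this step, the console actions above map onto three calls in the elbv2 API: create the load balancer, create the target group, and create a listener that forwards between them. The sketch below only assembles the parameter payloads as plain dicts in the shape the boto3 `elbv2` client calls accept; the VPC and subnet IDs are placeholders you would substitute with your own.

```python
# Placeholder IDs -- substitute your own VPC and two public subnets.
VPC_ID = "vpc-EXAMPLE"
SUBNETS = ["subnet-EXAMPLE-a", "subnet-EXAMPLE-b"]

# Parameters for elbv2 create_load_balancer.
load_balancer_params = {
    "Name": "spot-demo-load-balancer",
    "Subnets": SUBNETS,              # two AZs, as selected in the console
    "Type": "application",
    "Scheme": "internet-facing",
}

# Parameters for elbv2 create_target_group.
target_group_params = {
    "Name": "spot-demo-target-group",
    "Protocol": "HTTP",              # HTTP only for this demo; use HTTPS in production
    "Port": 80,
    "VpcId": VPC_ID,
}

def listener_params(load_balancer_arn, target_group_arn):
    """Parameters for elbv2 create_listener: forward all HTTP traffic
    on port 80 to the target group."""
    return {
        "LoadBalancerArn": load_balancer_arn,
        "Protocol": "HTTP",
        "Port": 80,
        "DefaultActions": [
            {"Type": "forward", "TargetGroupArn": target_group_arn}
        ],
    }
```

Each dict would be passed to the corresponding call (`create_load_balancer(**load_balancer_params)`, and so on); the ARNs the first two calls return feed into `listener_params`.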

Step 2: Create a Launch Template

Overview

The Launch Template will serve as our blueprint for creating EC2s. When the Auto Scaler needs to launch an EC2, it will look at this Launch Template.

Directions

  • Go to EC2 > Launch Templates.
  • Create a new launch template.

  • Name the launch template spot-demo-launch-template.
  • Select “Auto Scaling guidance.” This tells the console to surface the settings Auto Scaling needs from the template.
  • For the Launch template contents select both the AMI and the Instance type for our base setup.
  • Select Amazon Linux 2 with the 64-bit (x86) architecture for AMI.
  • Select t3a.small for the instance type.
  • For network settings choose VPC as the platform.
  • For the security groups select the spot-demo-security-group created in the Application Load Balancer creation process.

As an optional step you can add instance tags to be automatically applied to each EC2 instance that is created by the Auto Scaling.

  • The final step will be to go into Advanced details and add this script to the User data.

#cloud-config
repo_update: true
repo_upgrade: all

packages:
  - httpd
  - curl

runcmd:
  - [ sh, -c, "amazon-linux-extras install -y epel" ]
  - [ sh, -c, "yum -y install stress-ng" ]
  - [ sh, -c, "echo hello world. My instance-id is $(curl -s http://169.254.169.254/latest/meta-data/instance-id). My instance-type is $(curl -s http://169.254.169.254/latest/meta-data/instance-type). > /var/www/html/index.html" ]
  - [ sh, -c, "systemctl enable httpd" ]
  - [ sh, -c, "systemctl start httpd" ]

This script will print out the instance id and instance type when we access any of the servers.

  • Leave all other options to their default settings.
  • Now create the template.

Step 3: Create an Auto Scaling Group

Overview

The Auto Scaling Group will handle the scaling of our EC2 instances as well as selecting from a list of available capacity pools for Spot. This makes managing Spot Instances dead simple and hands free.

Directions

  • Go to EC2 > Auto Scaling Groups.
  • Create an Auto Scaling Group.
  • Name the Auto Scaling Group spot-demo-auto-scaling-group.
  • Select the Launch Template spot-demo-launch-template we created in the previous step.
  • Leave all options default and click “Next.”
  • Alter the Launch Template a little to allow for use of Spot Instances. To do so select “Combine purchase options and instance types.”

From here we can decide how we want to set up our Auto Scaling Group to deliver Spot Instances.

  • On-Demand Instances: We can set a base number of On-Demand Instances. This makes sure we always have an EC2 instance running even if all Spot capacity is exhausted, sort of a safety net. Set this to 1.
  • % On-Demand: This is the percentage of instances above the base that the Auto Scaler should launch as On-Demand. We already have a base of 1 On-Demand instance as our safety net, so we only want a small share of additional On-Demand instances on top of it. Set this to 10.
  • % Spot: Because we set % On-Demand to 10, this automatically adjusts to 90; the two percentages always sum to 100. The goal is to save money, so we’ll keep this aggressive value of 90 so that the Auto Scaler launches mostly Spot Instances.
  • Select which Spot Instance types to use for Auto Scaling. This list is pre-populated based on the t3a.small we set in the Launch Template and allows us to be flexible across many instance types. The more instance types you include, the greater your flexibility.

AWS offers a good tool for browsing prices and related instance types by Region and instance vCPU/memory here. You can use it to find instance types that fit your needs and increase your instance type flexibility.

This list is prioritized from first choice (top) to last choice (bottom) and can be re-ordered using the arrows.

You can also get more granular with the weight option. Weight is a measure of how much capacity an instance type provides, not its priority. If your Auto Scaling Group has a desired capacity of 4 instances and t3.large has a weight of 4, Auto Scaling will launch only a single t3.large. For the purposes of this demo the pre-populated options will do just fine.
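To make the percentage and weight settings concrete, here is a small Python sketch of roughly how the split works. The rounding behavior is a simplifying assumption of mine, not the exact Auto Scaling algorithm, so treat the numbers as illustrative.

```python
import math

def purchase_split(desired, on_demand_base=1, on_demand_pct_above_base=10):
    """Rough split of desired capacity between On-Demand and Spot,
    assuming the On-Demand share above the base rounds to the nearest
    whole instance (an approximation of the real algorithm).
    Returns (on_demand_count, spot_count)."""
    above_base = max(desired - on_demand_base, 0)
    on_demand_above = round(above_base * on_demand_pct_above_base / 100)
    spot = above_base - on_demand_above
    return on_demand_base + on_demand_above, spot

def instances_for_weight(desired_capacity, weight):
    """With weighted capacity, one t3.large at weight 4 satisfies a
    desired capacity of 4 on its own."""
    return math.ceil(desired_capacity / weight)
```

With this demo’s settings (desired 4, base 1, 10% On-Demand above base), `purchase_split(4)` gives 1 On-Demand instance and 3 Spot Instances: 10% of the 3 instances above the base rounds down to zero extra On-Demand.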

  • Finally, at the bottom of this page, place this Auto Scaling Group into our VPC and select the two public subnets that were selected during the creation of our Application Load Balancer. Now click “Next.”
  • Select the “Enable load balancing” box.
  • Choose the Target Group we created in the Application Load Balancer process.
  • Click “Next.”
  • Set capacities.
    • Desired: Set to 4. We want to start the workload with this number of instances.
    • Minimum: Set to 2. For the workload to be considered healthy we need a minimum number of instances.
    • Maximum: Set to 8. We will allow the workload to scale up to this at most.

Target Tracking looks at metrics to determine whether to scale in or to scale out. We can select a list of metric options to monitor and to base our scaling policy on.

  • For Metric type select “Application Load Balancer request count per target.”
  • Select the Target Group we created in the Application Load Balancer process.
  • Set Target value to 50. This keeps the average request count per target near 50. If we go over that threshold, the Auto Scaler looks at the capacity limits we set and then decides whether or not to add capacity to handle the load (i.e., scale out).
  • Click “Next” until you hit Review.
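The decision behind this target-tracking policy can be sketched as simple arithmetic: launch enough instances that each one sees about the target number of requests, clamped to the group’s minimum and maximum. This is my own simplified model of target tracking, not the exact CloudWatch algorithm.

```python
import math

def desired_instances(requests_per_interval, target_per_instance=50,
                      minimum=2, maximum=8):
    """Approximate target tracking: enough instances so each one
    handles about `target_per_instance` requests per interval,
    bounded by the Auto Scaling group's min/max capacity."""
    needed = math.ceil(requests_per_interval / target_per_instance)
    return min(max(needed, minimum), maximum)
```

For example, at 210 requests per interval this model wants 5 instances (210 / 50 rounded up), while very low or very high traffic pins the group at its minimum of 2 or maximum of 8.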

Now you can review your options. If everything is correct go ahead and create the Auto Scaling Group.

Step 4: Test it!

With everything correctly created we can test our web servers by visiting them.

  • Go to EC2 > Instances

We can see the four new instances created by our Auto Scaling Group along with their Lifecycle type.

  • Now go to EC2 > Load Balancers and grab the spot-demo-load-balancer DNS name. Then navigate to that address in your browser. Yours should look similar to this:
  • Keep refreshing the page and you should see a new instance-id or instance-type appear. This is how you know everything was properly set up, and you now have Spot Instances working as your web servers.
  • All finished!

Final Thoughts

In this tutorial we have learned about Spot Instances, reasons to use them, and how to set them up for web servers. They are a great way to save money without having to design any cutting-edge or complicated architecture. A lot of the heavy lifting is handled by Auto Scaling, and you get to sit back and reap the savings.

There are a slew of workload types you can use Spot for: big data, containers, CI/CD are just a few examples that fit the use case. Vary your workloads with Spot and see the savings.

If you’d like help incorporating Spot Instances into your workloads, please reach out to info@1strategy.com and have one of our experts help you.

If you’re interested in additional cost savings, check out “First Steps to Cost Optimization,” by Ted Swinyar or learn more about how 1Strategy can help you save money and resources HERE.