
Serverless CI/CD Tutorial, Part 2: Test

Integration testing Serverless applications with Jest, CodeBuild and CodePipeline

This blog is the second post in a three-part tutorial on setting up CI/CD pipelines for apps built with the Serverless framework. In the first tutorial, we created a CodeBuild project to lint and unit test our code, then added that project to CodePipeline. This tutorial covers integration testing using Jest, plus configuring our CodeBuild project to deploy a test environment, test against it, and tear it down. Our final tutorial will see us extend the pipeline to include deployments to staging and production environments. All of the code is available in our repo.
Since we already have our CodeBuild project and pipeline created, we have four steps to tackle to get integration testing working. They are:

  • Writing integration tests
  • Adding commands to our buildspec.yml to deploy our test environment, run tests, and tear down the test environment
  • Communicating the endpoint of our test environment to our integration tests (we’ll do this with serverless-stack-output)
  • Creating an IAM role for our CodeBuild project so it can deploy and remove our test environment

Let’s get started!
 

Creating Integration Tests with Jest

You can use any test framework to create your integration tests; here, we’re using Jest, with Supertest added in for HTTP assertions. To get started, install Supertest and Jest as dev dependencies:
npm install --save-dev jest supertest
You may also need to install Jest globally, so you have access to the jest command during development.
Next, let’s create an integration test file. We’ve created unit and integration subdirectories within our __tests__ directory (the __tests__ directory is where Jest looks for test files). Besides being tidy, this will allow us to run our unit and integration tests separately. The integration test file goes in the integration subdirectory.
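A layout along these lines works well (the file names here are just illustrative):

__tests__/
  unit/
    todos.test.js
  integration/
    todos.test.js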

Now, let’s look at the integration test file. Here’s our first test:


const url = 'http://localhost:3000';
const request = require('supertest')(url);
describe('/todos routes', () => {
  it('POST /todos returns an empty object', () => {
    const postObj = {
      title: 'Feed the cats',
      completed: false
    };
    return request
      .post('/todos')
      .send(postObj)
      .expect(200)
      .then((res) => {
        expect(res).toBeDefined();
        expect(res.body).toEqual({});
      });
  });
});

Let’s walk through what’s going on here. We’re using Serverless Offline, along with serverless-dynamodb-local, to expose our app’s endpoints locally. This means we can also run our integration tests locally, against the localhost endpoint. For now, this is fine; in the next step, we’ll add the endpoint of the deployed test app, so we can run the test in our CodeBuild project.
Jest uses the familiar describe block, with it blocks nested inside. From each it block, we return a Supertest promise to the Jest runner. Supertest gives us a few chainable methods: here we’ve used an HTTP method, .post; .send to attach our request body; and one of the expect assertions Supertest provides out of the box. The callback inside .then is handled by Jest, so we can use any of Jest’s expect methods there.
Our next test looks similar:


  it('GET /todos returns a list with the previously posted todo', () => {
    // Return the promise so Jest waits for the request and its assertions.
    return request
      .get('/todos')
      .expect(200)
      .then((res) => {
        expect(res).toBeDefined();
        expect(Array.isArray(res.body)).toBe(true);
        expect(res.body[0]).toHaveProperty('title', 'Feed the cats');
        expect(res.body[0]).toHaveProperty('completed', false);
        expect(res.body[0]).toHaveProperty('id');
        expect(res.body[0]).toHaveProperty('updatedAt');
      });
  });

We could seed our test DB in advance, but for simplicity’s sake we’ll run the GET test after the POST test.
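If you did want to seed explicitly instead, a minimal sketch would be a beforeAll hook that creates the record before the tests run (this reuses the Supertest client from above and isn’t part of the final test file):

beforeAll(() => {
  // Return the promise so Jest waits for the seed request to complete.
  return request
    .post('/todos')
    .send({ title: 'Feed the cats', completed: false })
    .expect(200);
});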
We’re using npm scripts to run tests and lint, so let’s add the test command to our package.json. Our scripts section now looks like this:


  "scripts": {
    "test": "jest __tests__/unit/*",
    "lint": "eslint */*.js",
    "integration": "jest __tests__/integration/*"
  },

This lets us run our integration tests locally with:
npm run-script integration
And voila! We have integration tests.
 

Creating a Test Environment

Running tests locally is nice, but we want to execute them against a deployed instance of our app. We also want that instance to be transient: it should only be deployed for as long as we need to run our tests, and then it should be torn down. Every time we run our tests, we should get a fresh, new instance of our app. Not only does that ensure a clean test environment, it also keeps our AWS bill down.
One note: best practice would be to separate our integration testing into a different CodePipeline stage. For the sake of brevity, we’ll be using the same CodePipeline stage, and the same CodeBuild project, for both unit and integration testing.
With that in mind, let’s turn to the buildspec.yml we created in the last tutorial. We need to do four things:

  • Install the Serverless framework
  • Deploy our test environment
  • Run integration tests against that environment
  • Remove our test environment

 
First, we need to install Serverless. Let’s add that to our install section, like so:


phases:
  install:
    commands:
      - npm install
      - npm install -g serverless

Next up: deploying. A best practice relating to serverless apps in general is that every environment should be in its own “stack.” To understand what that means, we need to talk about CloudFormation. CloudFormation is AWS’s version of infrastructure as code. You write CloudFormation templates to create, update, and delete resources on AWS. The collection of related resources created by a single CloudFormation template is called a “stack.”
When we use the Serverless framework to scaffold our serverless apps, we declare our app configuration in our serverless.yml. When we deploy, Serverless takes our serverless.yml, identifies the needed AWS resources, and translates that into a CloudFormation template. The resources in that template then become part of a single CloudFormation stack.
The thing is, if we’ve been doing serverless deploy during development, and then we serverless deploy to create our test environment, and then again to deploy to production, we’ll still only have one stack—essentially, only one environment for our app. To create separate stacks, the Serverless framework gives us the --stage option. --stage creates a different stack for each “stage” name, whether it’s dev, test, or prod. Every time you deploy, you should be deploying to a given stage.
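For example, one common serverless.yml pattern (a sketch; adapt it to your own config) is to default the stage and let the --stage flag override it:

provider:
  name: aws
  # Default to "dev", but let "serverless deploy --stage test" override it.
  # The resulting CloudFormation stack is named <service>-<stage>, so each
  # stage gets its own, completely separate stack.
  stage: ${opt:stage, 'dev'}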
Let’s add the deploy command to the build section of our buildspec.yml, followed by our test command (-v makes it verbose):


  build:
    commands:
      - npm run-script lint
      - npm test
      - serverless deploy --stage test -v
      - npm run-script integration

Great! Now we need to tear down our test environment when we’re done. We want this to happen whether our tests succeed or fail, so it goes in the post_build phase; post_build runs regardless of the outcome of the build phase. Our final buildspec.yml looks like this:


version: 0.2
phases:
  install:
    commands:
      - npm install
      - npm install -g serverless
  build:
    commands:
      - npm run-script lint
      - npm test
      - serverless deploy --stage test -v
      - npm run-script integration
  post_build:
    commands:
      - serverless remove --stage test -v

Next up: communicating the test endpoint to our integration tests.
 

Using serverless-stack-output to Communicate the Deployed Endpoint

Whenever we deploy a Serverless app into a fresh stack, CloudFormation assigns it a new, random service endpoint. Often, we deploy once and then keep updating that same deployment, so the app endpoint never changes. Since we’re removing our test app at the end of every build, each new deployment of the test app gets a new service endpoint. That endpoint is printed to stdout on each deploy, but it’s difficult to parse out without some serious Bash fu.
Enter serverless-stack-output! This plugin saves the outputs from a Serverless deployment (technically, it saves the CloudFormation stack output) into a JSON file. We can then read from that file in our integration tests.
First, we’ll need to install it as a dev dependency:
npm install --save-dev serverless-stack-output
Next, add it to the serverless.yml as a plugin:


plugins:
  - serverless-dynamodb-local
  - serverless-offline
  - serverless-stack-output

And configure that plugin to save output to our desired file:


custom:
  output:
    file: .build/stack.json

Next, create our stack.json file at the path we just selected. Since the only part of the stack output we care about is the endpoint, that’s the only property in our object. The localhost endpoint is our default:


{
  "ServiceEndpoint": "http://localhost:3000"
}

Finally, update the integration test to use the stack.json as the source for the service endpoint:


const stackOutput = require('../../.build/stack.json');
const url = stackOutput.ServiceEndpoint;
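Putting those pieces together, the top of the integration test file now reads:

// Read the endpoint written by serverless-stack-output on deploy;
// locally this resolves to the committed default of http://localhost:3000.
const stackOutput = require('../../.build/stack.json');
const url = stackOutput.ServiceEndpoint;
const request = require('supertest')(url);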

Excellent! We’ve created our tests, our buildspec.yml brings up and destroys our test environment, and our tests are configured to use the correct service endpoint. Finally, let’s give our CodeBuild project the permissions it needs to deploy and remove our test environment.

Create an IAM Role for our CodeBuild Project

IAM roles confer certain privileges: for example, the ability to create Lambda functions or delete API Gateways. When our CodeBuild project runs, it assumes a given role and—by extension—a certain set of permissions. The standard role created for CodeBuild projects is fine for things like unit tests, but doesn’t have the correct permissions for a Serverless deployment.
On AWS, the permissions granted to a role are spelled out in policies. Policies are standalone documents—think JSON files—that are “attached” to roles. To create a role with the permissions we need to deploy our test environment, we’ll do the following:

  • Create a policy with the right permissions for deployment/deletion
  • Create a role for our CodeBuild project to assume
  • Attach our new policy to this role
  • Attach the pre-existing policy for our CodeBuild project to this role
  • Tell CodeBuild to use this new role

 

Creating an IAM policy for Serverless deployment

Much has been written about appropriate IAM policies for deploying Serverless apps, and rightfully so: like anything on AWS, we want to use the least-privileged role possible, meaning it can access only what it needs to and nothing more (for more on Serverless policies, check out some great discussions in this blog and this GitHub issue).
That gets tricky with serverless apps, because they typically span many AWS services and therefore require many permissions. We’re going to show you one way to create your policy. Like any policy, you should ABSOLUTELY sanity-check it to make sure you’re not granting permissions you shouldn’t, and you SHOULD NOT use it in production without a thorough review.
We’re going to use a Yeoman generator, generator-serverless-policy, to get started. First, install Yeoman:
npm install -g yo
Then install the serverless-policy generator:
npm install -g generator-serverless-policy
Creating our starter policy is then as easy as:
yo serverless-policy
and following the prompts.

Just like that, we have a JSON file containing the policy for our test stage. This is a great start; now, we need to add two things for it to work in our situation. Open it up in an editor and add cloudformation:ValidateTemplate for all CloudFormation resources:


    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:List*",
        "cloudformation:Get*",
        "cloudformation:PreviewStackUpdate",
        "cloudformation:ValidateTemplate"
      ],
      "Resource": [
        "*"
      ]
    },

and logs:DeleteLogGroup for logs in our region (required for deleting our test stage):


    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DeleteLogGroup"
      ],
      "Resource": [
        "arn:aws:logs:us-west-2:*:*"
      ],
      "Effect": "Allow"
    },

Next, let’s add this policy to AWS. Navigate to the IAM console and select Policies from the left-hand side. Click Create Policy and then click on the JSON tab. Replace the existing JSON with the JSON from the policy we just created. Click Review Policy and give the policy a name. Like our deployments, we’ll have one policy for each environment (test, staging, and prod), so we recommend naming it “your-project-name-test” or something similar. Click Create Policy and you’re good to go!
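If you prefer the command line, the same step can be done with the AWS CLI (this assumes you saved the generated policy as policy.json; the policy name is just an example):

aws iam create-policy \
  --policy-name your-project-name-test \
  --policy-document file://policy.json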
 

Creating a Role and Attaching Policies

Now, select Roles from the left-hand menu. Click on Create Role; this will bring up a three-part wizard to create our role and attach policies to it.
On this first screen, select CodeBuild as the entity that will use this role. Click Next: Permissions.
On the second screen, we need to add two policies. Use the search box to find the name of the policy you just created, and check the box next to its name. Then, search for the name of your CodeBuild project and check the box for the policy called CodeBuildPolicy-your-project-name-here (this is the policy your CodeBuild project is currently using). Click Next: Review, and you should see a review screen listing both policies.

Add your role’s name; for consistency, we’ve used the same name as our CodeBuild project. Click Create role and you should see your role in the list.
 

Configuring CodeBuild to use our role

Almost there! Now we need to tell CodeBuild to assume this role when it runs our builds.
Select CodeBuild from the services menu and then click on the name of your project. Click Edit Project and scroll down to the Service Role section. In the dropdown, choose the role you just created and then unselect the Allow AWS CodeBuild to modify this service role box; the role you created has all the permissions it needs.
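If you’d rather script this step, the equivalent AWS CLI call looks like this (the project name and role ARN are placeholders):

aws codebuild update-project \
  --name your-project-name \
  --service-role arn:aws:iam::123456789012:role/your-project-name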
 

Fin

And we’re done! Push your code and watch your CodeBuild project work. If you check the logs, you should see it deploy the test environment, run your integration tests against it, and then tear down that environment. In the next tutorial we’ll cover deployments to staging and production. If you have additional questions or would like to schedule a consultation with one of our AWS Experts, please reach out to us at Info@1Strategy.com.
 

Acknowledgements

Thanks go to the contributors to this Serverless GitHub issue, the writers of the various blog posts I referenced, and the creators of serverless-stack-output and the Serverless Policy Generator. You have made this work much easier.
 
 
