Serverless backend with Terraform

Apr 05, 2021

In this article we take a look at how you can use Terraform to set up a serverless backend in AWS.

Source code can be found here:


What is serverless architecture?

Serverless computing (or FaaS, Functions as a Service) refers to a concept where the underlying server infrastructure running your code is abstracted away.

Consider this description from the AWS docs:

“AWS Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code is not running.”

This approach is cost-efficient and scales easily. You only pay for the function invocations (CPU time + memory used) instead of an hourly price for a VM. An additional benefit is that you no longer need to spend time managing servers and can focus more on the business logic.
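To make that billing model concrete, here is a back-of-the-envelope sketch. The formula follows Lambda's GB-second billing (allocated memory multiplied by execution time); the price argument is a placeholder, not current AWS pricing:

```javascript
// Rough Lambda compute cost estimate: Lambda bills duration in
// GB-seconds (allocated memory in GB multiplied by execution time in
// seconds). The pricePerGbSecond argument is a placeholder value -
// check current AWS pricing for real numbers.
function lambdaComputeCost(invocations, durationMs, memoryMB, pricePerGbSecond) {
  const gbSeconds = invocations * (durationMs / 1000) * (memoryMB / 1024);
  return gbSeconds * pricePerGbSecond;
}
```

For example, one million 100 ms invocations at 128 MB only add up to 12500 GB-seconds of billed compute, and zero invocations cost nothing at all.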

What is Terraform?

Terraform is a declarative, high-level tool for infrastructure management (Infrastructure as Code, or IaC).
Basically, you write configuration files that describe the desired infrastructure using a high-level configuration language, and Terraform figures out how to provision those resources in the cloud.

Not only does this eliminate the need to visit the cloud UIs, but also makes your cloud infrastructure versioned, documented, easily readable and reusable.

You can read a more in-depth introduction to Terraform here if you are interested.

Example application introduction

The example application we are deploying the serverless backend for is called “scottquotes” – an online compendium of the all-time greatest Michael Scott quotes.

Below is an extremely fancy, high-level architecture plan for the application.

High level architecture chart

The application is going to return quotes either by season, a specific episode, or a “shorthand”.

There will be an APIGateway deployment that receives the requests, proxying them on to the serverless Lambda backend. The Lambda will have a handler for each of the endpoints, as well as a src/ folder for shared application code.

Consider reading this short article on lambda basics to better understand the flow of this application.

Basic terraform setup

Start by creating a project folder and the required Terraform files:

$ mkdir scottquotes && cd scottquotes/
$ touch main.tf variables.tf

Define all the variables we are going to need in variables.tf:


# default region
variable "region" {
  type    = string
  default = "eu-north-1"
}

# account id
variable "accountId" {
  type    = string
  default = "[REDACTED]"
}

# aws profile to use
variable "profile" {
  type    = string
  default = "default"
}

# project name
variable "name" {
  type    = string
  default = "scottquotes"
}

Add a “provider” block to main.tf:

# provider block
# ==============================================================
provider "aws" {
  profile = var.profile
  region  = var.region
}

Terraform works with many different cloud providers, so the provider block informs Terraform that we specifically want to use AWS.

VPC and subnets

Since we are operating on AWS, the first thing we should define is the VPC and subnets.
AWS gives you a default VPC and 3 subnets per region. These cannot be created or removed, but we can tell Terraform to start tracking them:

# import default subnets and VPC
# NOTE: these are not created or managed by terraform!
# ==============================================================
resource "aws_default_vpc" "default" {
  tags = {
    name = "default VPC"
  }
}

resource "aws_default_subnet" "default-subnet-az-a" {
  availability_zone = var.az["a"]
  tags = {
    name = "default subnet a"
  }
}

resource "aws_default_subnet" "default-subnet-az-b" {
  availability_zone = var.az["b"]
  tags = {
    name = "default subnet b"
  }
}

resource "aws_default_subnet" "default-subnet-az-c" {
  availability_zone = var.az["c"]
  tags = {
    name = "default subnet c"
  }
}

The above resources use an az variable to map availability zones to subnets, so that needs to be added to variables.tf.

Replace the <your subnet AZ x> placeholders with your default subnet availability zones:

# default subnet mappings
variable "az" {
  type = map(string)
  default = {
    "a" = "<your subnet AZ a>",
    "b" = "<your subnet AZ b>",
    "c" = "<your subnet AZ c>"
  }
}


Lambda functions

We want three Lambda functions that will be mapped to our three API endpoints. Additionally, we also need an IAM role for the Lambda functions to use:

# Lambda
# ==============================================================
resource "aws_iam_role" "lambda_iam_role" {
  name = "iam_for_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_lambda_function" "shorthand" {
  role          = aws_iam_role.lambda_iam_role.arn
  filename      = "foo.zip" # dummy zip created in the deployment step
  function_name = "shorthand-handler"
  handler       = "lambda/shorthand-handler.handler"
  runtime       = "nodejs12.x"
}

resource "aws_lambda_function" "season" {
  role          = aws_iam_role.lambda_iam_role.arn
  filename      = "foo.zip"
  function_name = "season-handler"
  handler       = "lambda/season-handler.handler"
  runtime       = "nodejs12.x"
}

resource "aws_lambda_function" "episode" {
  role          = aws_iam_role.lambda_iam_role.arn
  filename      = "foo.zip"
  function_name = "episode-handler"
  handler       = "lambda/episode-handler.handler"
  runtime       = "nodejs12.x"
}

Let’s go over the Lambda resource arguments:

role - the IAM role the function assumes when executing
filename - path to the local deployment package (zip) to upload
function_name - the name the function gets in AWS
handler - the entrypoint, in the format path/file.exportedFunction
runtime - the language runtime to execute the function with


APIGateway

For this project I want three different endpoints. Brackets signify a path parameter.

/api/season/{season} - return all quotes for a season
/api/season/{season}/episode/{episode} - return quotes for a specific episode
/api/shorthand/{shorthand} - return quotes for a specific episode via a shorthand (e.g. S2E8)
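For illustration, a shorthand like S2E8 could be split into its season and episode parts with a small helper. This helper is hypothetical, not from the actual repo:

```javascript
// Hypothetical helper: parse a shorthand like "S2E8" into numeric
// season and episode parts. Returns null for anything that does not
// match the S<season>E<episode> pattern.
function parseShorthand(shorthand) {
  const match = /^S(\d+)E(\d+)$/i.exec(shorthand);
  if (!match) {
    return null; // not a valid shorthand
  }
  return { season: Number(match[1]), episode: Number(match[2]) };
}
```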

First, let’s define the base APIGateway resource and the base /api/ path part that is going to prefix every URL:

# APIGateway
# ==============================================================
resource "aws_api_gateway_rest_api" "api" {
  name = "${var.name}-backend-api"
}

# prefix all paths with /api/
resource "aws_api_gateway_resource" "base-path" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_rest_api.api.root_resource_id
  path_part   = "api"
}

Note: the path_part value should not contain any forward slashes; APIGateway handles those internally.

Now that we have the base /api path defined, we can start adding path parts to it:

# /shorthand endpoint
resource "aws_api_gateway_resource" "shorthand" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_resource.base-path.id
  path_part   = "shorthand"
}

resource "aws_api_gateway_resource" "shorthand-value" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_resource.shorthand.id
  path_part   = "{shorthandValue}"
}

The curly brackets in {shorthandValue} tell APIGateway that this path part should be treated as a path parameter.
APIGateway automatically exposes all path parameters to us when the request is proxied to Lambda.

Now that we have an endpoint, let’s define an HTTP method for it:

resource "aws_api_gateway_method" "shorthand-get" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.shorthand-value.id
  http_method   = "GET"
  authorization = "NONE"
}

This API will be public, so authorization is not necessary.
If you have the need to restrict access to your API, the list of available authorization methods can be found here.

Now to add the lambda integration:

resource "aws_api_gateway_integration" "shorthand-get-integration" {
  rest_api_id             = aws_api_gateway_rest_api.api.id
  resource_id             = aws_api_gateway_resource.shorthand-value.id
  http_method             = aws_api_gateway_method.shorthand-get.http_method
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.shorthand.invoke_arn

  depends_on = [aws_api_gateway_method.shorthand-get]
}

When targeting Lambda via AWS_PROXY, the integration_http_method must always be set to POST, regardless of the method used to invoke the API endpoint.

The uri argument defines the target AWS resource we want to proxy the HTTP event to – in this case our Lambda function.

Grant APIGateway permission to invoke this particular lambda function:

# permissions for APIGW to invoke lambda
resource "aws_lambda_permission" "shorthand-lambda-permission" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = "arn:aws:lambda:${var.region}:${var.accountId}:function:${aws_lambda_function.shorthand.function_name}"
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.api.execution_arn}/*/*/*"

  depends_on = [aws_lambda_function.shorthand]
}

Let’s go over the resource parameters:

statement_id - an identifier for the permission statement
action - the action being allowed, here invoking a Lambda function
function_name - the ARN of the function the permission applies to
principal - the AWS service being granted the permission, here APIGateway
source_arn - restricts invocations to this specific API (any stage, method, and path)

That’s it for the shorthand endpoint.

We have now defined an endpoint, /api/shorthand/{shorthandValue}, that upon receiving a GET request will invoke the Lambda function defined in the integration resource.

I won’t go through the other endpoints here so as not to repeat myself, but you can view the full source code on GitHub if you are interested.

Finally, an API deployment and stage also need to be defined. A stage in APIGateway is a snapshot of an existing API that is made available behind callable endpoint URLs.

# APIGateway deployment
resource "aws_api_gateway_deployment" "deployment" {
  rest_api_id = aws_api_gateway_rest_api.api.id

  lifecycle {
    create_before_destroy = true
  }

  # list every API resource here so the deployment is re-created on changes
  depends_on = [
    aws_api_gateway_integration.shorthand-get-integration
  ]
}

resource "aws_api_gateway_stage" "stage" {
  stage_name    = "${var.name}-backend-api"
  rest_api_id   = aws_api_gateway_rest_api.api.id
  deployment_id = aws_api_gateway_deployment.deployment.id
}

The Terraform documentation recommends adding all the API resources as dependencies to aws_api_gateway_deployment, as well as adding a lifecycle policy, in order to work around some issues described here.


DynamoDB

This one is really simple. We just want to have a single table to store all the quotes in.

DynamoDB might be a little confusing if you have only been using traditional relational databases. All you need to know here is that as long as the amount of data in this example table stays relatively small, you should be able to stay within the free tier. Otherwise it is highly recommended to read through the DynamoDB docs, because when used incorrectly it can get expensive pretty fast.

# DynamoDB
# ==============================================================
resource "aws_dynamodb_table" "scottquotes" {
  name           = "scottquotes"
  billing_mode   = "PROVISIONED"
  read_capacity  = 25
  write_capacity = 25
  hash_key       = "id" # the hash key must match a defined attribute

  attribute {
    name = "id"
    type = "S"
  }
}

If you are familiar with DynamoDB, you might notice that unless I add secondary indexes, I will be forced to scan the whole table for each query.

This is an extremely inefficient way to use DynamoDB, but I am still opting to do so for two reasons: I want to remain in the AWS free tier, and the amount of data will be so small that exclusively using scans shouldn’t be a problem.
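As a rough sketch of what such a scan could look like in one of the Node.js handlers (the season attribute name and both helpers are assumptions, not from the actual repo; aws-sdk v2 is bundled in the nodejs12.x runtime):

```javascript
// Hypothetical query helper: scan the whole scottquotes table and let
// DynamoDB filter the results by season. Building the params in a
// separate function keeps them easy to test without touching AWS.
function buildScanParams(season) {
  return {
    TableName: "scottquotes",
    FilterExpression: "season = :season",
    ExpressionAttributeValues: { ":season": season },
  };
}

async function quotesBySeason(season) {
  // aws-sdk v2 is available in the nodejs12.x Lambda runtime
  const AWS = require("aws-sdk");
  const client = new AWS.DynamoDB.DocumentClient();
  const result = await client.scan(buildScanParams(season)).promise();
  return result.Items;
}
```

Note that a FilterExpression only trims the results after the scan has read the whole table, so this still consumes read capacity proportional to the table size – exactly why scans get expensive at scale.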

Terraform backend

Before deploying the infrastructure, let’s create a backend for the Terraform state to live in.

A “state” in Terraform is a file that stores the mapping between your configuration and real-world resources, and is absolutely vital for Terraform to work.

You could store the state files on your local machine, but that might lead to problems if someone else wants to modify the infrastructure (or if you accidentally rm -rf the folder :D). Git could be used as well, but you can still run into problems where multiple people have different versions of the state files, try to modify the resources at the same time, and so on.

Both of these problems are eliminated by using a centralized, real-time updating store for the state files, called a backend.

Create a new private S3 bucket by hand, and then add its information to a backend block at the top of the file:

# TF backend
# ==============================================================
terraform {
  backend "s3" {
    # NOTE: variables are not allowed inside backend blocks,
    # so these values have to be literals
    bucket = "scottquotes-tf-backend"
    key    = "scottquotes-tf-backend/terraform.tfstate"
    region = "eu-north-1"
  }
}

Terraform will now read and write the state of your infrastructure to this S3 bucket instead of the local filesystem.

If you have multiple people working with the same Terraform state, you should absolutely add a lock for the state file as described here.

Run deployment

Now that all the infrastructure has been defined, it’s time to deploy.

Start by initializing terraform:

$ terraform init

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v3.34.0...
- Installed hashicorp/aws v3.34.0 (signed by HashiCorp)


Terraform has been successfully initialized!

You can now run terraform plan to see what will be deployed:

$ terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

[List of stuff to be created]

Plan: 26 to add, 0 to change, 0 to destroy.


Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Terraform has parsed the configuration files and verified that they are in order.

Before deploying these resources, the zip file referenced in the Lambda function blocks needs to be created. Without the file, Terraform will throw a no such file or directory error.

$ echo "console.log('hello world')" > foo.js && zip foo.zip foo.js && rm foo.js

Note that Terraform will throw an error if the file cannot be decompressed – it really has to be a zip file!

Run Terraform apply:

$ terraform apply --auto-approve
aws_default_subnet.default-subnet-az-a: Creating...
aws_default_subnet.default-subnet-az-c: Creating...
aws_iam_role.lambda_iam_role: Creating...
aws_default_subnet.default-subnet-az-b: Creating...


aws_api_gateway_stage.stage: Creating...
aws_api_gateway_stage.stage: Creation complete after 0s [id=ags-sa22lemynh-contactor-backend-api]

Apply complete! Resources: 28 added, 0 changed, 0 destroyed.

If you used the configurations from the repository I provided for this guide, everything should work out of the box. If not, Terraform generally has pretty good error messages for debugging the configuration.

If you want to destroy the resources, that can be achieved with:

$ terraform destroy --auto-approve


Summary

In this blog post we went over how to define cloud infrastructure as code for an example application using Terraform. Actually uploading working code to the Lambda functions (and how to use Lambda overall) is left as an exercise for the reader.