Deploying an HTTP-accessible AWS Lambda via Terraform

A placeholder image with a star in the middle and a grid of lines with labels indicating the distance to the border. 720x360 is written in large letters in the center of the image.

In order to learn Terraform, I decided to build a small website on AWS, using Lambda, S3 and CloudFront. No databases, just something small.

This article is part of the "Terraform - placeruler.knappi.org" series; you may also want to read the other articles, especially the older ones.

I have spent the last years developing web-frontends, with a focus on test-automation and clean code.

While there is still a lot to explore in that area, I feel like I should gain some experience with other technologies as well. One area that I have only very seldom touched is DevOps and infrastructure-as-code, and that is where I want to learn more. My colleagues are especially fond of Terraform, so that is what I’ll try next.

Eventually, I would like to build a real website with backends for my popup-cards project. But to learn Terraform, I decided to do something simpler.

placeruler.knappi.org

Have you ever needed placeholder images for testing the layout of your React components? Well, https://placekitten.com/ was the place to go (it seems to be down now). You could just enter a simple URL to generate cat photos of any size. If you like that, there are still https://placebeer.com/ and https://placebear.com. Guess what pictures those generate… And there is https://placehold.co/, which allows you to customize the picture in many ways, but in essence just generates images containing some text.

While I liked using those sites, there are some things that I always missed:

  1. When I use object-fit: cover; in CSS, the images are cropped. I would like to see how much is cropped, but this is difficult to tell from bear images.
  2. I once tried to write visual regression tests for our Storybook, and those tests failed because we were using placekitten in the stories and those images changed over time…

Last week, with occasional support from my colleagues Roman and Jacob, I built a small placeholder generator myself. It generates a star-like image that allows you to estimate how much of the image has been cropped on the website. It also contains a grid of lines with labels indicating the pixel distance to the border of the image. The site is called https://placeruler.knappi.org; you can see an example image at the top of this post.

The stack

I will probably write multiple blog posts about this project. But here is an upfront overview of the stack I used:

  • AWS Lambda: I started out by writing a Lambda, because this is the cheapest way to host backend functionality if you do not have a lot of traffic. There is a generous free tier, you only pay for what you use, and it scales up automatically.
  • CloudFront: I wanted to cache the generated images, partly for performance reasons, but also to save costs on the Lambda. CloudFront offers a global network of edge locations and can cache image data. And since it is also an AWS service, it seemed like the natural choice.
  • Route53: By default, you get cryptic domain names for your CloudFront distributions and also for Lambda functions. But I wanted to have a nice domain name, so I thought Route53 would be necessary to handle DNS lookups. Later I found out that I was wrong. Spoiler: Terraform can also modify my existing DNS zones at Hetzner, so instead of using Route53 I simply created a CNAME record at Hetzner directly (a rough sketch of this follows after the list).
  • S3: While the generated images are cached by CloudFront, I also wanted to create a homepage for the project. The natural place for this in AWS is S3, so I created a bucket for the website as well.
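
To give you an idea of what the Hetzner spoiler above could look like in practice, here is a rough sketch. It assumes the community timohirt/hetznerdns provider, an existing zone named knappi.org and a made-up CloudFront domain name. This is not my actual configuration, so check the provider documentation for the exact resource and argument names.

# Rough sketch, not the configuration used for placeruler.knappi.org.
# Assumes the community "timohirt/hetznerdns" provider; check its docs.
terraform {
  required_providers {
    hetznerdns = {
      source  = "timohirt/hetznerdns"
      version = "~> 2.0"
    }
  }
}

# Look up the DNS zone that is already managed at Hetzner
data "hetznerdns_zone" "main" {
  name = "knappi.org"
}

# Point a subdomain at the CloudFront distribution via a CNAME record
resource "hetznerdns_record" "placeruler" {
  zone_id = data.hetznerdns_zone.main.id
  name    = "placeruler"
  type    = "CNAME"
  value   = "d111111abcdef8.cloudfront.net." # made-up CloudFront domain
  ttl     = 3600
}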

Infrastructure as Code and Terraform

I could have created these services by clicking in the AWS Console. But infrastructure-as-code has a lot of advantages. You can version it. You can create multiple deployments. You can share it with others. And you can automate it.

So I did click around in the Console to get an impression of what I could do. But very quickly I started writing Terraform code instead of clicking. There are some things that I would like to mention upfront:

Verbose configurations

When you deploy multiple AWS services with Terraform, everything feels very verbose. You might think you just have to define a small block of code for each part of the stack. But no, roles and policies are required as well. All the security configuration that is set up very quickly in the AWS Console needs to be specified in detail when using Terraform. This may be more work than expected, but in the end I think it is a good thing. That way, everything that gets created is visible in the code.

Costs

I expected this to be a project that does not cost me anything, because Lambda, CloudFront and S3 all have a free tier. Even the SSL certificates are free. But I did pay for the Route53 hosted zone, which I did not expect. The 0,50€ did not hurt a lot, but I will definitely always look at the cost of each service before including it in my Terraform files. Writing Terraform code feels like programming. And running your code usually does not cost any money. But with infrastructure-as-code, it is infrastructure that is deployed around the world. And this does cost money. And if it is something that costs money every time you run it, you might be surprised by the bill after testing the code 100 times.

On the other hand, I could have avoided some costs (about 1€ plus taxes) if I had used Terraform consistently from the start. I created a CloudFront distribution via the Console in the beginning and played with it. I added some WebACLs and deleted the CloudFront distribution, not aware that the WebACLs were still there. I only noticed because my month-to-date cost was increasing even though everything was supposed to be in the free tier. Terraform would have removed those rules together with the distribution.

Building the Lambda

Now let’s look at some code. How do you build a Lambda? If you want, you can create an example project and follow along. Note that I do not cover the whole process of creating an AWS account, along with secure ways to access it from Terraform. This might be another blog post. I will also not talk about how to install Terraform; I use asdf to manage my tools, but you can do this however you like.

Initialize Terraform

Initially, you can create a main.tf Terraform file with the following content:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.0"
    }
  }
  required_version = ">= 0.13"
}

# Configure the AWS Provider
provider "aws" {
  region = "eu-west-1"
}

This does not do anything yet; it just defines some providers that do the actual work. But we can now initialize Terraform, which will download those providers.

The aws provider obviously handles the AWS deployments, while the archive provider is used to create the ZIP files that we need to deploy our Lambda. All those providers are well documented on the Terraform homepage.

terraform init
# Initializing the backend...
# Initializing provider plugins...
# [...]
# Terraform has been successfully initialized!

Create the Lambda handler

For now, we use a very simple “Hello world” Lambda handler. You can save it as lambda/src/index.mjs in your project.

export const handler = async (event, context) => {
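  // This object shape (statusCode, headers, body) is turned into the HTTP response by the Lambda function URL.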
  return {
    statusCode: 200,
    headers: {
      "Content-Type": "text/plain",
    },
    body: "Hello world!",
  };
};

Now we need to tell Terraform to create a ZIP file that we can use in the Lambda. I like to store the name in a local variable, because we will need it again later.

locals {
  lambda_zip       = "tmp/lambda_function_payload.zip"
}

data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "lambda/"
  output_path = local.lambda_zip
}

Now, you can run

terraform plan

This determines what needs to be done to reach the state described in the Terraform files. Since we have not defined any resources yet, it should not plan any changes. But it already creates the ZIP file in the tmp directory, because archive_file is a data source, not a resource, and data sources are evaluated during planning. It also creates a number of files:

  • .terraform contains the downloaded providers. Do not commit this directory. You can recreate those files by running terraform init again.
  • .terraform.lock.hcl contains the versions of the providers that are currently used. You should commit this file to your Git repository. It is similar to npm's package-lock.json file.
  • terraform.tfstate contains the current state of the infrastructure, as far as Terraform is aware. Do not commit this file, it may contain sensitive information. If you need to share the state with others, you can use an http backend to store the state somewhere else, for example in your GitLab instance (see the sketch after this list). But for now, we do not need this.
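
Just as a rough idea of what such a backend could look like (it is not needed for this tutorial): GitLab can manage Terraform state via the http backend, which is configured inside the terraform block. The project URL and state name below are made up, and the credentials (a GitLab access token) would normally not be hard-coded.

terraform {
  backend "http" {
    # Made-up GitLab project ID and state name
    address        = "https://gitlab.com/api/v4/projects/12345/terraform/state/my-state"
    lock_address   = "https://gitlab.com/api/v4/projects/12345/terraform/state/my-state/lock"
    unlock_address = "https://gitlab.com/api/v4/projects/12345/terraform/state/my-state/lock"
    lock_method    = "POST"
    unlock_method  = "DELETE"
    # username and password (a GitLab access token) are best supplied via
    # `terraform init -backend-config=...` or environment variables
  }
}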

Now, let’s create a resource.

Create the Lambda resource

This is getting a bit verbose, but we have to create the Lambda, an IAM role and a policy document. Add this to your main.tf file:

resource "aws_lambda_function" "main_lambda" {
  # Use the filename from the local variable
  filename         = local.lambda_zip
  # The name of the lambda function as seen in the AWS console
  function_name    = "test-lambda"
  # `src/index` refers to the filename in the ZIP file. `handler` is the function name that will be called.
  handler          = "src/index.handler"
  # If the ZIP file changes, this hash will change and the lambda will be updated.
  source_code_hash = data.archive_file.lambda.output_base64sha256
  # The memory size of the lambda. This is optional and the default is 128MB. The lambda gets more CPU power with more memory.
  # Note that this also increases the costs, because they are computed as execution time multiplied by memory size.
  memory_size = "3008"
  # If the lambda runs longer than 5 seconds, it will be terminated.
  timeout = "5"
  # Lambdas are deployed on x86 CPUs by default, but ARM CPUs are cheaper.
  architectures = ["arm64"]
  # The runtime of the lambda. This is a Node.js 20.x runtime.
  runtime = "nodejs20.x"
  # I have not completely understood how this works, but a Lambda requires a role and this is the ARN of that role, which is defined below.
  role = aws_iam_role.iam_for_test_lambda.arn
}

resource "aws_iam_role" "iam_for_test_lambda" {
  name               = "iam_for_test_lambda"
  # The policy document used for the role. This is defined below.
  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
}

data "aws_iam_policy_document" "lambda_assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]
  }
}

My understanding is that the Lambda assumes this role when it executes, and the policy document is the trust policy that allows the Lambda service to assume the role. It seems a little verbose, but it represents what is actually happening in AWS. I am not an expert in this yet, so I might be wrong.
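
A side note that is not required for this minimal example: the trust policy above only controls who may assume the role. Permissions that the function has at runtime are attached to the role separately. As a small, optional sketch (not part of the setup described here), this is how the AWS-managed basic execution policy could be attached so that the Lambda is allowed to write its logs to CloudWatch:

# Optional: allow the Lambda to write its logs to CloudWatch
resource "aws_iam_role_policy_attachment" "lambda_basic_execution" {
  role       = aws_iam_role.iam_for_test_lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}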

Now, you can run

terraform plan

again, and it will show you all the resources that it is going to create. Some values are (known after apply), because they are generated by AWS. If you are satisfied, you can run

terraform apply

It will show the plan again, and you have to type yes to confirm. You can also use the -auto-approve flag to skip the confirmation, but be aware of the risks. When this is done, you can check in the AWS Console that the Lambda was created, verify that the code was deployed correctly, and test the Lambda there.

Exposing the Lambda via HTTP

What you cannot do yet is access the Lambda via HTTP. You could do this with an API Gateway, but that requires a lot of resources, permissions and configuration. When I talked to Jacob about this, he mentioned that you can just use a function_url instead. This is a relatively new feature, and it is so much easier than deploying an API Gateway. You just need this in your main.tf:

# Define a function-url for our function
resource "aws_lambda_function_url" "main_lambda_url" {
  function_name      = aws_lambda_function.main_lambda.function_name
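  # "NONE" makes the URL publicly accessible without authentication; "AWS_IAM" would require signed requests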
  authorization_type = "NONE"
}

# Output the URL to the console when deploying, so that we can test the lambda
output "lambda_url" {
  value = aws_lambda_function_url.main_lambda_url.function_url
}

Now, when you run terraform apply again, it will show the plan, which now consists only of adding the function URL. Everything else stays the same. When you confirm, you will see the URL in the output. You can now open this URL in your browser and see the “Hello world!” message.

Cleaning up

Since this small experiment is not supposed to last, you probably want to clean up now. Just type

terraform destroy

and everything is gone.

Conclusion

In this post, I have described how to deploy a Lambda function with Terraform. To be honest, most of the code was taken from the Terraform documentation. It is interesting to have the function_url as an alternative to the API Gateway, and I hope it helps to have a working example. I have created a repository with the example from this article at https://gitlab.com/nknapp/terraform-lambda-example.

You can try it out for yourself if you like.