Terraform Managed AMIs With Packer

Maintain current AWS AMIs based on source AMI and userdata updates and customize the rebuild using HashiCorp's Packer and Terraform.

Mike Horwath
January 15, 2021

This article was originally published on Geek and I, January 14, 2021, and has been republished with the author's permission.

I have been working with a friend on learning Terraform to manage his new and growing AWS environment. One of the challenges I gave him was to use Terraform to manage the AMI updates that Packer creates, or to initiate an update if the source AMI is newer than the current state.

Terraform doesn’t have a Packer provider, so this requires using other resources built into Terraform to accomplish a working and trackable state.

Problem Statement

Maintain current AMIs based on source AMI and userdata updates, and rebuild the AMI as needed when the source (or gold image) AMI is updated or when you update your userdata, using Packer to accomplish customization.

  • Figure out our source AMI via data lookup(s)
  • If source ami-id has changed, then initiate new AMI build
  • If userdata has changed, then initiate new AMI build
  • If source ami-id and userdata have not changed, do nothing (idempotent!)
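The decision logic above can be sketched outside Terraform as a small shell script — the paths and the state file here are hypothetical stand-ins for what Terraform's trigger tracking does for us:

```shell
#!/bin/sh
# Sketch of the rebuild decision (illustrative only): rebuild when the
# source ami-id or the userdata file's sha256 differs from the last build.
set -e
workdir=$(mktemp -d)
echo "echo customizing" > "$workdir/packer-customize.sh"

current_ami="ami-0c007ac192ba0744b"                    # from the data lookup
current_hash=$(sha256sum "$workdir/packer-customize.sh" | cut -d' ' -f1)

state="$workdir/last-build.state"                      # stands in for Terraform state
if [ -f "$state" ] && [ "$(cat "$state")" = "$current_ami $current_hash" ]; then
  echo "no change, skipping rebuild"
else
  echo "change detected, rebuilding AMI"               # packer build would run here
  echo "$current_ami $current_hash" > "$state"
fi
```

Terraform's `triggers` map gives us exactly this comparison for free, with the previous values kept in state.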

Terraform built-in resources

I accomplished this by abusing the null_resource provider and local-exec provisioner.

First, let’s go find the AMI we need as the source:

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  # Canonical
  owners = ["099720109477"]
}

This returns an ami-id of ami-0c007ac192ba0744b (as of 20210114 in AWS region us-east-2). These AMIs are updated by Canonical periodically, and each update produces a new ami-id.

Now that we have an ami-id, we can add that as a trigger to execute changes to null_resource. This has a second trigger to check on the userdata file that will be used to do customization:

resource "null_resource" "build_custom_ami" {
  triggers = {
    aws_ami_id      = data.aws_ami.ubuntu.id
    sha256_userdata = filesha256("deploy/packer-customize.sh")
  }

  provisioner "local-exec" {
    environment = {
      VAR_AWS_REGION = var.aws_region
      VAR_AWS_AMI_ID = data.aws_ami.ubuntu.id
    }

    command = <<-EOF
    set -ex;
    packer validate \
      -var "aws_region=$VAR_AWS_REGION" \
      -var "aws_ami_id=$VAR_AWS_AMI_ID" \
      packer-configs/custom_ami.json;
    packer build \
      -var "aws_region=$VAR_AWS_REGION" \
      -var "aws_ami_id=$VAR_AWS_AMI_ID" \
      packer-configs/custom_ami.json
    EOF
  }
}
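A minimal sketch of what `packer-configs/custom_ami.json` might contain — the variable names (`aws_region`, `aws_ami_id`) and the provisioner script path come from the article, but the builder details are assumptions, not the author's actual template:

```json
{
  "variables": {
    "aws_region": "",
    "aws_ami_id": ""
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "{{user `aws_region`}}",
      "source_ami": "{{user `aws_ami_id`}}",
      "instance_type": "t3.micro",
      "ssh_username": "ubuntu",
      "ami_name": "custom-ubuntu-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "deploy/packer-customize.sh"
    }
  ]
}
```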

So, basically, I have the following relevant directory structure. You will probably also have backend resources, perhaps some requirements, and so on.

-> packer-configs/
---> custom_ami.json
-> deploy/
---> packer-customize.sh
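For illustration, here is a hypothetical `deploy/packer-customize.sh` — the real one would install packages, harden the OS, and so on, but the key point is that any edit to this file changes its sha256 and triggers a rebuild on the next apply:

```shell
#!/bin/sh
# Hypothetical customization script Packer runs inside the build instance.
# Editing this file changes filesha256("deploy/packer-customize.sh") and
# therefore fires the null_resource trigger.
set -ex
echo "custom-ami build $(date -u +%Y%m%d)" > /tmp/build-info.txt
# real scripts would apt-get upgrade, install agents, bake configs, etc.
```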

Implementation via Jenkins or other CI/CD systems is left to you to figure out.

What are the variables used for in local-exec?

I have items running in multiple regions and each region has its own AMIs (and resulting ami-ids). The above has been pared down a bit for brevity.

You can use the aws provider to connect to multiple regions concurrently:

### per region provider info using provider listings

provider "aws" {
  alias  = "region-us-east-1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "region-us-east-2"
  region = "us-east-2"
}

provider "aws" {
  alias  = "region-us-west-1"
  region = "us-west-1"
}

provider "aws" {
  alias  = "region-us-west-2"
  region = "us-west-2"
}

Of course, you can make this even more dynamic by using data calls for aws_caller_identity within the region you are working against and applying the result programmatically, but I’ll leave that to you for now.
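As a sketch of that idea — the resource names here are illustrative, not from the original config — you can pass an aliased provider to the data sources and pick up per-region lookups:

```hcl
# Illustrative only: per-region lookups using the aliased providers above.
data "aws_caller_identity" "us_east_2" {
  provider = aws.region-us-east-2
}

data "aws_ami" "ubuntu_us_east_2" {
  provider    = aws.region-us-east-2
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}
```

Each region then gets its own null_resource trigger on its own data lookup, keeping the per-region AMIs independently tracked.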