By Adrian Bridgett | 2020-03-10


Terraform is probably the most widely used tool for managing resources as code. Which resources? There are many; here we’ll look at two of the most popular kinds: cloud resources (e.g. object stores such as S3, network policies, databases) and git repositories.

Much of the time these resources will already exist, so you’ll need to import them into Terraform. This guide covers how to do that with the handy terraformer tool, along with some best practices.

Why bother?

In a phrase - “Infrastructure as Code” (IaC). Catchphrases aren’t a compelling argument though, so let’s expand this into a proper set of benefits.

  • Reproducibility: if needed we can rebuild the infrastructure, recreate the networks, databases, access lists - whatever you’ve defined in Terraform.
  • Multiplicity: not only can we recreate what we have now, we can create a similar copy. Production, staging, development environments are just (near-)copies of each other. IaC makes this quick with a low rate of errors.
  • Comparability: it’s easy to compare environments (e.g. ensure that the production DB only differs from staging DB in terms of capacity, not security).
  • Standards: by creating opinionated modules you can force (or encourage) best practices. There is more detail on this later.
  • Pull requests: since this is just code, you can use Pull Requests (PR) to manage it. Proposed changes are easy to review and comment on.
  • Inclusivity: generally it’s safe for everyone to view the Terraform code (passwords are best placed elsewhere, such as Vault). Developers can see what the current setup is, review proposed changes, or even create changes themselves - something they often lack the permissions to do otherwise. Whether it’s an AWS, GCP or GitHub change, it’s all quite readable and no knowledge of the web interface is needed.
  • Security: expanding on inclusivity, no-one needs to be an administrator any longer. Not even the DevOps team - you can use Atlantis, for example, to plan (show proposed changes) and apply your Terraform configuration.
  • Others: it’s code, with all the benefits that brings. There is a built-in audit trail. In the event of a mistake you can generally roll back (as long as it wasn’t “delete database”!). There may also be a timeline if you log each terraform apply. I’ve also found it’s easier to answer questions such as “How many buckets do we have across all our accounts?” or “Which buckets don’t have bucket policies?”.

When importing existing resources, I use a four stage plan:

  1. Dump
  2. Modularise
  3. Migration
  4. Improvements


Dump

Use terraformer to pull down the current set of resources, for example:

echo 'provider "aws" {}' > init.tf
terraform init
terraformer import aws --resources=s3

If you look at the documentation you will see that this pulls down both S3 buckets and their policies. Several files will have been created (generally split by region):

  • provider.tf - records the provider that was used
  • aws_s3_bucket.tf - S3 buckets
  • aws_s3_bucket_policy.tf - S3 bucket policies
  • terraform.tfstate - imported state
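To give a flavour of the output, a generated aws_s3_bucket.tf contains one resource block per bucket, along the lines of the following (the bucket name and attributes here are invented for illustration):

```terraform
# Illustrative excerpt of the kind of resource terraformer generates
resource "aws_s3_bucket" "logs" {
  bucket        = "example-company-logs"
  acl           = "private"
  force_destroy = false

  versioning {
    enabled = false
  }
}
```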


Modularise

We could use these resources as-is, however I think it’s a good idea to modularise them.

By using opinionated modules we remove flexibility - or, put another way, we can enforce restrictions, i.e. standards.

Taking the S3 bucket as an example, we may write a module that accepts just two parameters - name and public (defaulting to false). This reduces errors and also makes new buckets easy to create - there’s no need to check the 20-50 available attributes.
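A minimal sketch of such a module might look like the following (names are illustrative, and the acl approach matches provider versions of this era - newer AWS provider versions manage ACLs via a separate resource):

```terraform
# modules/s3-bucket/main.tf - hypothetical opinionated wrapper
variable "name" {
  type = string
}

variable "public" {
  type    = bool
  default = false # safe default: private
}

resource "aws_s3_bucket" "bucket" {
  bucket = var.name
  acl    = var.public ? "public-read" : "private"
}
```

A caller then only needs to supply a name (and, rarely, public = true) rather than wading through every bucket attribute.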

First see what’s used; maybe check which values are common for a given attribute:

grep acl * | sort | uniq -c | sort -rn

With this knowledge you can create your module. Write the module from the point of view of the user - as in the example above, it may be preferable to combine buckets and their policies into one module. Defaults should be sane - “safe” rather than “most frequent” is my policy (i.e. buckets default to private).

Inevitably this won’t be straightforward: you’ll discover a large number of annoying differences (tag descriptions, for example). My recommendation is not to change anything yet - make your module initially flexible enough to cope with what you have now. This will mean adding lots of parameters to your module (mark them as deprecated to discourage further use).
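For example (the variable name is hypothetical), a legacy attribute can be kept as a clearly-marked deprecated parameter so existing buckets import cleanly:

```terraform
# DEPRECATED: only here so existing buckets can be imported unchanged.
# Do not use for new buckets; to be removed once the migration is done.
variable "legacy_acl" {
  type    = string
  default = "private"
}
```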

You will also need to use the [terraform state mv](https://www.terraform.io/docs/commands/state/mv.html) command to move resources into the module. This is somewhat painful however some basic scripting often removes the drudgery.
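A sketch of that scripting, assuming one module instance per bucket and entirely hypothetical resource names - it prints the commands for review rather than running them, so you can eyeball the addresses before piping the output through sh:

```shell
# Hypothetical resource names taken from the generated aws_s3_bucket.tf
buckets="logs assets backups"

# Print (rather than run) the state moves so they can be reviewed first
for b in $buckets; do
  echo terraform state mv "aws_s3_bucket.${b}" "module.s3_${b}.aws_s3_bucket.bucket"
done
```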


Migration

Eventually you should reach a point where running terraform plan shows nothing to do. This is perfect - you can now safely move that entire class of resources under Terraform control.

Let everyone know well in advance (including why and when). Double-check that no new resources have been added since you last ran terraformer (a terraform plan will not show resources created outside Terraform, only changes to resources it already knows about). If there are new resources, add the code and terraform import them.

The hardest part is stopping people from managing resources manually - removing access is one good solution.


Improvements

With the resources now managed by Terraform, it’s time to clean up and make improvements.

  • Temporary variables added for migration should be removed
  • Defaults that were overridden perhaps ought not to be
  • Tags should be used to aid discoverability and to help reduce costs
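As a sketch of that last point, assuming the module exposes a tags variable (all names here are illustrative), the module can merge enforced defaults with per-bucket extras:

```terraform
# Hypothetical: enforced default tags, merged with caller-supplied ones
variable "tags" {
  type    = map(string)
  default = {}
}

resource "aws_s3_bucket" "bucket" {
  bucket = var.name
  tags = merge(
    { "managed-by" = "terraform" }, # illustrative enforced default
    var.tags,
  )
}
```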

Each of these changes should result in a PR with an associated plan, making it easy to see what changes will be made. It’s generally much easier to do these now, one at a time, rather than up front - even if the module was temporarily more complex as a result. The fact that we have audited, revertible changes also makes it safer to make these changes now.


The future

My belief is that Kubernetes will continue to consume other projects - everything will end up as a chunk of YAML. There are already several projects in this area: aws-service-operator, crossplane and aws-s3-provisioner.


Summary

Terraform is a fantastic tool for looking after your resources. You can use terraformer to import existing resources more easily. Using opinionated modules helps to enforce standards and clean up outliers. Import and cleanup are probably best done as a two-step process.