Create S3 Bucket using Terraform

Updated on 17th June 2022. Tested with Terraform v1.2.3 and the latest AWS provider, v4.19.0. The latest provider changes how the lifecycle configuration is added.


In an earlier post, I provided some information on using the AWS Encryption SDK, and in that post I created a KMS key using the AWS CLI. In this post I am going to create the KMS key and an S3 bucket using Terraform, which you can then use to store objects encrypted with Server Side Encryption.

The level of encryption you need depends on your individual case. You may want to just rely on Server Side Encryption, or you may want to encrypt using a Data Encryption Key.


  • Install Terraform on your server or laptop.
  • Install AWS CLI on your server or laptop.
  • In my demonstration I use an AWS profile called ‘automation’ that I configured beforehand. You can create a profile, assume a role, or simply modify the file and add your AWS credentials. See Access.
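If you go the named-profile route, the provider block can point at it directly. A minimal sketch (the region here is illustrative, not taken from the original files):

```hcl
provider "aws" {
  region  = "us-east-1"  # illustrative; set your own region
  profile = "automation" # named profile from ~/.aws/credentials
}
```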

Note: Published a new post to Create Storage Bucket in Google Cloud using Terraform.

Sample Code

The Terraform code is split into multiple files; I will discuss the code in detail later in this post.

Here are some facts about what this Terraform example does.

Note: Running this demo will incur some cost. You can lower the cost by cleaning up and removing all resources when you are done. You may also have to wait a bit between operations: I have noticed that if you run an ‘apply’ and immediately follow it with a ‘destroy’, the destroy fails. This is probably due to S3 propagation. Wait a while and then retry the ‘destroy’.

  • The KMS Customer Managed Key is created with a shorter deletion window and an alias of ‘mycmk’.
  • An S3 bucket named ‘skbali-demo-area’ is created. Use a unique name of your own, as S3 bucket names are global; two buckets cannot have the same name.
  • The S3 bucket has Server Side Encryption enabled, using the ‘mycmk’ key created above.
  • The bucket also has a policy attached that grants the current user access to it.
  • A lifecycle rule is attached that deletes objects under demo/ once they are older than 7 days.
  • In addition to the above, all public access to the bucket is blocked.

Let us review the code a bit more in detail.

The file lets Terraform pull the current authenticated user's identity and account ID. I use this information to build the policy that is attached to the S3 bucket being created.
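The data source in question is `aws_caller_identity`; a minimal sketch of such a file:

```hcl
# Look up the identity behind the credentials running this Terraform run.
data "aws_caller_identity" "current" {}
```

The account ID and user ARN are then available as `data.aws_caller_identity.current.account_id` and `data.aws_caller_identity.current.arn`.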

variable "kms_key_alias" {
  type    = string
  default = "mycmk"
}

variable "kms_key_description" {
  type    = string
  default = "s3key"
}

variable "kms_deletion_window_in_days" {
  type    = number
  default = 7
}

variable "tags" {
  type = map(string)
  default = {
    "Purpose"    = "Demo",
    "CostCenter" = "infra"
  }
}

variable "s3_bucket" {
  type    = string
  default = "skbali-demo-area"
}

Here, I define an alias for my KMS key and reduce its deletion window. I also set the name for my S3 bucket and some tags to apply to it.

The next file puts it all together; all the AWS resources are created here. Each resource to be created starts with a ‘resource’ block.

The file specifies what information Terraform will display when it completes.
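Based on the outputs shown later in the plan and apply logs, such a file might look like this (a sketch; the exact expressions in the original files may differ):

```hcl
output "bucket_name" {
  value = aws_s3_bucket.bucket.id
}

output "kms_key_alias_name" {
  value = aws_kms_alias.kms_key_alias.name
}

output "aws_kms_key" {
  value = aws_kms_key.key.key_id
}
```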

The AWS Provider version 4 introduced new resources for S3 bucket configuration; I have updated the code to use them.

  • Resource block for the S3 bucket, which creates the bucket.
  • New resource aws_s3_bucket_acl to make the bucket private.
  • New resource aws_s3_bucket_server_side_encryption_configuration to set up encryption.
  • New resource aws_s3_bucket_lifecycle_configuration to set up the lifecycle policy.
  • Resource block for the bucket policy attached to the bucket.
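To make the resource list above concrete, here is a sketch of how those blocks fit together with the provider v4 resources, using the resource names seen in the apply output later in this post. The argument values follow the variables defined earlier, and the policy statement is illustrative; the post's actual files may differ in detail.

```hcl
resource "aws_kms_key" "key" {
  description             = var.kms_key_description
  deletion_window_in_days = var.kms_deletion_window_in_days
}

resource "aws_kms_alias" "kms_key_alias" {
  name          = "alias/${var.kms_key_alias}"
  target_key_id = aws_kms_key.key.key_id
}

resource "aws_s3_bucket" "bucket" {
  bucket = var.s3_bucket
  tags   = var.tags
}

resource "aws_s3_bucket_acl" "s3_bucket_acl" {
  bucket = aws_s3_bucket.bucket.id
  acl    = "private"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "s3_bucket_encryption" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.key.arn
      sse_algorithm     = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "s3_lifecycle" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id     = "expire-demo"
    status = "Enabled"

    filter {
      prefix = "demo/"
    }

    expiration {
      days = 7
    }
  }
}

resource "aws_s3_bucket_public_access_block" "bucket" {
  bucket                  = aws_s3_bucket.bucket.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_policy" "demo-policy" {
  bucket = aws_s3_bucket.bucket.id

  # Grant the user running the demo access to the bucket (illustrative statement).
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowCurrentUser"
      Effect    = "Allow"
      Principal = { AWS = data.aws_caller_identity.current.arn }
      Action    = "s3:*"
      Resource = [
        aws_s3_bucket.bucket.arn,
        "${aws_s3_bucket.bucket.arn}/*",
      ]
    }]
  })
}
```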

If you need any additional clarification on it, please feel free to comment below and ask.


To run our demo and create the resources, we have to first initialize our Terraform environment.

$ terraform version
Terraform v1.2.3
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v4.19.0

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/aws v4.19.0...
- Installed hashicorp/aws v4.19.0 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

‘terraform init’ is an important step: Terraform scans your configuration files, determines which cloud providers are required, and installs the plugins it needs to run.
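Version selection itself is driven by a settings block, typically pinned like this (a sketch matching the versions used in this post):

```hcl
terraform {
  required_version = ">= 1.2.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.19"
    }
  }
}
```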

I also ran ‘terraform version’ to show the versions used in this demo.

 $ terraform plan
data.aws_caller_identity.current: Reading...
data.aws_caller_identity.current: Read complete after 0s [id=743325541661]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_kms_alias.kms_key_alias will be created
  + resource "aws_kms_alias" "kms_key_alias" {
      + arn            = (known after apply)
      + id             = (known after apply)
      + name           = "alias/mycmk"
      + name_prefix    = (known after apply)
      + target_key_arn = (known after apply)
      + target_key_id  = (known after apply)

............... More Output ..........

   + resource "aws_s3_bucket_server_side_encryption_configuration" "s3_bucket_encryption" {
      + bucket = (known after apply)
      + id     = (known after apply)

      + rule {
          + apply_server_side_encryption_by_default {
              + kms_master_key_id = (known after apply)
              + sse_algorithm     = "aws:kms"

Plan: 8 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + bucket_name        = "skbali-demo-area"
  + kms_key_alias_name = "alias/mycmk"

I always make it a habit to run ‘terraform plan’ first. It gives you a clear idea of what will happen when you run an ‘apply’; the abbreviated output above shows what Terraform intends to do.

Eight new resources will be created for us, including the KMS key, the S3 bucket, and the associated policies.

$ terraform apply

aws_kms_key.key: Creating...
aws_s3_bucket.bucket: Creating...
aws_s3_bucket.bucket: Creation complete after 0s [id=skbali-demo-area]
aws_s3_bucket_policy.demo-policy: Creating...
aws_s3_bucket_acl.s3_bucket_acl: Creating...
aws_s3_bucket_public_access_block.bucket: Creating...
aws_s3_bucket_lifecycle_configuration.s3_lifecycle: Creating...
aws_s3_bucket_policy.demo-policy: Creation complete after 1s [id=skbali-demo-area]
aws_s3_bucket_acl.s3_bucket_acl: Creation complete after 1s [id=skbali-demo-area,private]
aws_s3_bucket_public_access_block.bucket: Creation complete after 1s [id=skbali-demo-area]
aws_kms_key.key: Creation complete after 4s [id=1f4b7321-e8a9-4198-a0f4-9040470e09f1]
aws_kms_alias.kms_key_alias: Creating...
aws_s3_bucket_server_side_encryption_configuration.s3_bucket_encryption: Creating...
aws_kms_alias.kms_key_alias: Creation complete after 0s [id=alias/mycmk]
aws_s3_bucket_server_side_encryption_configuration.s3_bucket_encryption: Creation complete after 0s [id=skbali-demo-area]
aws_s3_bucket_lifecycle_configuration.s3_lifecycle: Still creating... [10s elapsed]
aws_s3_bucket_lifecycle_configuration.s3_lifecycle: Still creating... [21s elapsed]
aws_s3_bucket_lifecycle_configuration.s3_lifecycle: Still creating... [31s elapsed]
aws_s3_bucket_lifecycle_configuration.s3_lifecycle: Creation complete after 32s [id=skbali-demo-area]

Apply complete! Resources: 8 added, 0 changed, 0 destroyed.


aws_kms_key = "1f4b7321-e8a9-0000-a0f4-90xxxxxxxxxx"
bucket_name = "skbali-demo-area"
kms_key_alias_name = "alias/mycmk"

If you are OK with the plan, go ahead and run ‘terraform apply’. Depending on the resources being created, this can take from a few seconds to several minutes; in our case it should finish in about half a minute.

My apply output prints the information I chose to display in the outputs file.

dd if=/dev/urandom of=./xyz bs=1024 count=10000
10000+0 records in
10000+0 records out
10240000 bytes (10 MB, 9.8 MiB) copied, 0.0688974 s, 149 MB/s

ls -la xyz
-rw-rw-r-- 1 sbali sbali 10240000 Jun 17 09:50 xyz

aws s3 cp xyz s3://skbali-demo-area/logs/xyz --profile automation
upload: ./xyz to s3://skbali-demo-area/logs/xyz        

The commands above create a 10 MB file of random data and upload it to the S3 bucket; the next step is to examine the bucket's encryption metadata.

The output below clearly shows that the S3 bucket uses AWS KMS encryption with our key.

aws s3api get-bucket-encryption --bucket skbali-demo-area --profile automation
{
    "ServerSideEncryptionConfiguration": {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "1f4b7321-e8a9-0000-a0f4-90xxxxxxxxxx"
                },
                "BucketKeyEnabled": false
            }
        ]
    }
}


Using Terraform, it was quite easy to set up a KMS key and an S3 bucket with Server Side Encryption enabled.

Let me know by commenting below if you need any clarification on this demo. When the resources are no longer required, you can run ‘terraform destroy’ to remove everything that was created.


To be able to run a terraform destroy, the bucket must be empty. In the demo above I copied an object to the S3 bucket, so it is not empty.

Terraform can destroy all objects in the bucket if you add the following to the bucket resource:

force_destroy = true

Be very careful with this option. There is no undo. If you run a terraform destroy and proceed with it, all objects in the S3 bucket will be deleted.
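In context, the attribute belongs on the bucket resource itself; a sketch:

```hcl
resource "aws_s3_bucket" "bucket" {
  bucket        = var.s3_bucket
  force_destroy = true # lets destroy empty the bucket first; irreversible
  tags          = var.tags
}
```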

Further Reading


  1. thanks for the post, I need to know few things:
    1. is it better to use tfvars file instead of file and interpolate the variables ?
    2. I did not understand what is “aws_caller_identity”, how is it being used
    3. Can we define the bucket policy in another file and refer that json file, instead of putting policy in
    4. Can you provide a tutorial on hosting a static website on s3 using cloudfront, creating OAI, creating IAM role/user etc. ?
    Thanks again

    • 1. TBH – I am not sure what is better. Will need to research more on it.
      2. If you look at lines 60 and 69, where I refer to “${data.aws_caller_identity.current.arn}”: I want to give S3 access to the current user, and aws_caller_identity provides the ARN of the IAM user running this demo.
      3. You could define the policy in another file. But I wanted it to be dynamic, since I want to grant bucket access to whichever user runs the demo. If the policy were in a separate file, I might have had to hard-code the user.
      4. Hopefully I will get to try it out.

      Thanks for taking the time to read, point out my mistake and trying the demo code!

  2. I declared variables and it works fine in that case. I want to know if we need to create any cmk first and then refer in the variables as “mycmk” ? didn’t get that part as what is mycmk, do we need to create that in AWS console first and then refer that? any way to create it by terraform ?

    • The sample code creates the customer managed key ‘cmk’ and then uses it for the S3 bucket. You do not have to do it in the console.

      ‘mycmk’ is the alias for the key that is created. It is ‘my customer managed key’. I could have named it s3cmk to identify it is a customer managed key used for s3.

    • I just did a test and it works for me. A KMS key describe shows when it was created and the scheduled deletion. I set my window to 8 days.

      "CreationDate": 1587121410.154,
      "Enabled": false,
      "Description": "mycmk",
      "KeyUsage": "ENCRYPT_DECRYPT",
      "KeyState": "PendingDeletion",
      "DeletionDate": 1587859200.0,
