How to solve "Error loading state: AccessDenied: Access Denied status code: 403" when trying to use S3 as a Terraform backend?

Solution 1

I encountered this before. The following steps should resolve the error:

  1. Delete the .terraform directory
  2. Add the access_key and secret_key inside the backend block, as in the code below
  3. Run terraform init
terraform {
  backend "s3" {
    bucket = "great-name-terraform-state-2"
    key    = "global/s3/terraform.tfstate"
    region = "eu-central-1"
    access_key = "<access-key>"
    secret_key = "<secret-key>"
  }
}

The error should be gone.

Solution 2

I also faced the same issue. I manually removed the state file from my local system (you can find terraform.tfstate under the .terraform/ directory) and ran init again. Note that if you have multiple profiles configured in the AWS CLI, omitting the profile from the aws provider configuration makes Terraform use the default profile.
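If multiple profiles are in play, you can also pick one explicitly via the environment before re-running init; a minimal sketch (the profile name "myprofile" is a placeholder, not from the original post):

```shell
# Select the AWS profile Terraform should authenticate with.
# "myprofile" is a placeholder; it must exist in ~/.aws/credentials.
export AWS_PROFILE=myprofile
echo "Terraform will authenticate with profile: ${AWS_PROFILE:-default}"
# then: terraform init
```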

Solution 3

I knew that my credentials were fine by running terraform init on other projects that shared the same S3 bucket for their Terraform backend.

What worked for me:

rm -rf .terraform/
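Deleting the directory is safe because terraform init regenerates it; .terraform/ only holds cached providers and a local copy of the backend configuration and state. A small sketch in a throwaway directory, showing that the command only touches the local cache:

```shell
# Demonstrate in a temp dir what `rm -rf .terraform/` discards.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p .terraform/plugins                  # cached provider binaries live here
printf '{}' > .terraform/terraform.tfstate   # local copy of backend state
rm -rf .terraform/                           # the fix: drop the stale cache
[ -d .terraform ] || echo "local cache removed"   # remote state in S3 is untouched
```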

Solution 4

I googled around but nothing helped. My case: I was migrating the state from local to an AWS S3 bucket. Hope this solves your problem.

  1. Comment out the terraform block
provider "aws" {
  region = "region"
  access_key = "key" 
  secret_key = "secret_key"
}

#terraform {
#  backend "s3" {
#    # Replace this with your bucket name!
#    bucket         = "great-name-terraform-state-2"
#    key            = "global/s3/terraform.tfstate"
#    region         = "eu-central-1"
#    # Replace this with your DynamoDB table name!
#    dynamodb_table = "great-name-locks-2"
#    encrypt        = true
#  }
#}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "great-name-terraform-state-2"
  # Enable versioning so we can see the full revision history of our
  # state files
  versioning {
    enabled = true
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "great-name-locks-2"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}
  2. Run
terraform init
terraform plan -out test.tfplan
terraform apply "test.tfplan"

to create the resources (S3 bucket and DynamoDB table)

  3. Then uncomment the terraform block and run
AWS_PROFILE=REPLACE_IT_WITH_YOUR  TF_LOG=DEBUG   terraform init

If you get errors, just search for X-Amz-Bucket-Region:

-----------------------------------------------------
2020/08/14 15:54:38 [DEBUG] [aws-sdk-go] DEBUG: Response s3/ListObjects Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 403 Forbidden
Connection: close
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Fri, 14 Aug 2020 08:54:37 GMT
Server: AmazonS3
X-Amz-Bucket-Region: eu-central-1
X-Amz-Id-2: REMOVED
X-Amz-Request-Id: REMOVED

Copy the value of X-Amz-Bucket-Region, in my case eu-central-1.
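Rather than eyeballing the headers, the region can be extracted from the saved debug output; a sketch using sed, with a here-doc standing in for the response above (if you have the AWS CLI, aws s3api get-bucket-location --bucket <your-bucket> reports the same region):

```shell
# Pull X-Amz-Bucket-Region out of captured TF_LOG=DEBUG output.
region=$(sed -n 's/^X-Amz-Bucket-Region: //p' <<'EOF'
HTTP/1.1 403 Forbidden
Server: AmazonS3
X-Amz-Bucket-Region: eu-central-1
EOF
)
echo "$region"   # eu-central-1
```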

  4. Change the region in your terraform backend configuration to the corresponding value.
terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket         = "great-name-terraform-state-2"
    key            = "global/s3/terraform.tfstate"
    region         = "eu-central-1"
    # Replace this with your DynamoDB table name!
    dynamodb_table = "great-name-locks-2"
    encrypt        = true
  }
}

Solution 5

For better security, you may use shared_credentials_file and profile, like so:

provider "aws" {
  region = "region"
  shared_credentials_file = "~/.aws/credentials" # default
  profile = "default" # you may change to desired profile
}

terraform {
  backend "s3" {
    profile = "default" # change to desired profile
    # Replace this with your bucket name!
    bucket         = "great-name-terraform-state-2"
    key            = "global/s3/terraform.tfstate"
    region         = "eu-central-1"
    # Replace this with your DynamoDB table name!
    dynamodb_table = "great-name-locks-2"
    encrypt        = true
  }
}
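Before running terraform init with a named profile, it is worth confirming the profile actually exists in the credentials file; a hypothetical pre-flight check (a temp file stands in for ~/.aws/credentials so the sketch is self-contained):

```shell
# Verify the profile referenced by the backend exists in the credentials file.
creds=$(mktemp)   # stand-in for ~/.aws/credentials in this sketch
cat > "$creds" <<'EOF'
[default]
aws_access_key_id = EXAMPLEKEY
aws_secret_access_key = EXAMPLESECRET
EOF
profile=default
if grep -q "^\[$profile\]" "$creds"; then
  echo "profile '$profile' found"
else
  echo "profile '$profile' missing: terraform would fall back to other credentials" >&2
fi
```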
Author: helpper
Updated on February 18, 2022

Comments

  • helpper
    helpper about 2 years

    My simple terraform file is:

    provider "aws" {
      region = "region"
      access_key = "key" 
      secret_key = "secret_key"
    }
    
    terraform {
      backend "s3" {
        # Replace this with your bucket name!
        bucket         = "great-name-terraform-state-2"
        key            = "global/s3/terraform.tfstate"
        region         = "eu-central-1"
        # Replace this with your DynamoDB table name!
        dynamodb_table = "great-name-locks-2"
        encrypt        = true
      }
    }
    
    resource "aws_s3_bucket" "terraform_state" {
      bucket = "great-name-terraform-state-2"
      # Enable versioning so we can see the full revision history of our
      # state files
      versioning {
        enabled = true
      }
      server_side_encryption_configuration {
        rule {
          apply_server_side_encryption_by_default {
            sse_algorithm = "AES256"
          }
        }
      }
    }
    
    resource "aws_dynamodb_table" "terraform_locks" {
      name         = "great-name-locks-2"
      billing_mode = "PAY_PER_REQUEST"
      hash_key     = "LockID"
      attribute {
        name = "LockID"
        type = "S"
        }
    }
    

    All I am trying to do is to replace my backend from local to be store at S3. I am doing the following:

    1. terraform init (with the terraform {} block commented out)

    2. terraform apply - I can see in my AWS account that the bucket was created, and the DynamoDB table as well.

    3. now I uncomment the terraform block, run terraform init again, and get the following error:

    Error loading state:
        AccessDenied: Access Denied
            status code: 403, request id: xxx, host id: xxxx
    

    My IAM user has administrator access. I am using Terraform v0.12.24. As one can observe, I am writing my AWS key and secret directly in the file.

    What am I doing wrong?

    I appreciate any help!

  • helpper
    helpper almost 4 years
    I created another project to use the previous bucket and the DynamoDB table, and recreated the folder structure to match the key. When I ran terraform init I got: Successfully configured the backend "s3"! Terraform will automatically use this backend unless the backend configuration changes. Error refreshing state: AccessDenied: Access Denied status code: 403, request id: xxx, host id: xxx
  • DerPauli
    DerPauli almost 4 years
    In most cases it is easier to just create it by hand, especially when you don't have to do it often. What I meant by "create another TF project" is: Imagine you are working in a DevOps team and you have to create new dynamic Terraform projects on the fly to provide to your team. Then, instead of creating the state bucket manually, you could write a simple Terraform file which has a local state and provisions an S3 bucket and a DynamoDB table. Afterwards you take these two components and reference them by name in your terraform { backend "s3" {} } block.
  • DerPauli
    DerPauli almost 4 years
    I would be interested to see what output you get when you create the bucket by hand.
  • helpper
    helpper almost 4 years
    Sorry for the late reply. Nothing works: I tried to create the bucket and table from a different project, which didn't work, and also tried to create them manually. Always the same error.
  • DerPauli
    DerPauli almost 4 years
    You can try to debug the terraform init command with: TF_LOG=DEBUG terraform init. Maybe its worth having a look at your ~/.aws/credentials file (or your environment variables echo $AWS_ACCESS_KEY_ID ,echo $AWS_SECRET_ACCESS_KEY and echo $AWS_SESSION_TOKEN ) if there are some different credentials which may override your set credentials.
  • DerPauli
    DerPauli almost 4 years
    The best bet would be to look at the TF_LOG=DEBUG. Maybe also have a look at this github issue for more information.
  • Juancho
    Juancho about 3 years
    You can also set the AWS profile name instead of the access and secret keys.
  • eatsfood
    eatsfood about 3 years
    Best practices would advise against storing sensitive material like your access and secret keys in your Terraform files. This is especially true if you also use a code repository like GitHub. As @Juancho points out, all you need to do is include a line in the backend like this: profile = your_profile_name_from_the_aws_credentials_file. Also, deleting your .terraform directory is entirely unnecessary.
  • Juancho
    Juancho about 3 years
    Additionally, you can use shared_credentials_file to point to a credentials file in a location other than ~/.aws/credentials if needed.
  • Benjamin
    Benjamin almost 3 years
    Setting env var AWS_PROFILE explicitly did the trick! 🎉
  • Edeph
    Edeph over 2 years
    I confirm that the only thing needed is to add the profile property. Don't delete the .terraform dir and ideally don't put the access_key or secret_key in there, use the profile instead.