Restore a backup file on AWS RDS SQL Server from S3 bucket


Solution 1

The problem is with the backup (.bak) files: this error shows up when they are not transferred properly or are missing some metadata. Initially I had transferred the files from one AWS account (ap-northeast-1) to another AWS account (us-east-1) in different regions using the command

aws s3 sync s3://account1bucket s3://account2bucket --source-region ap-northeast-1

But it looks like that did not do the job properly. I have now manually downloaded the files from the source S3 bucket and uploaded them to the destination S3 bucket, and it started working.

This pull request might fix some of the issues with transferring bucket contents using the AWS CLI: https://github.com/aws/aws-cli/pull/1122
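
If you suspect a bad copy, one quick sanity check before retrying the restore is to compare the object's metadata in the source and destination buckets. A minimal sketch with the AWS CLI, assuming the backup key is MyDB.bak (the key name here is only an example):

# Compare the two copies: ContentLength should match exactly; ETags only
# match when the object was not uploaded as a multipart upload.
aws s3api head-object --bucket account1bucket --key MyDB.bak --region ap-northeast-1
aws s3api head-object --bucket account2bucket --key MyDB.bak --region us-east-1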

Solution 2

Another point to note is that the SQL backup filename in the S3 ARN is case sensitive.

exec msdb.dbo.rds_restore_database @restore_db_name='yourdatabasename', @s3_arn_to_restore_from='arn:aws:s3:::<bucket_name>/<CASESENSITIVE_SQL_BACKUPNAME.BAK>';
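
To confirm the exact key name, including its casing, you can list the bucket contents first and copy the key verbatim into the ARN. A minimal sketch, with the bucket name as a placeholder:

# Lists the keys exactly as S3 stores them, so the casing can be copied as-is.
aws s3 ls s3://<bucket_name>/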

Solution 3

Having spent the best part of a day reading posts (including this one) and double-checking all settings, only to find out that the issue was a silly one, I thought I'd share my findings.

Access denied is a bit of a generic error message for when the restore procedure cannot access or read the backup file.

  • It can be down to bucket / IAM permissions (an object-ACL check is sketched after this list)
  • It can be down to typos (a few people have posted that the file name is case sensitive)
  • It can be down to file corruption
  • It can be down to file / server incompatibility, e.g. a version mismatch or, as in my case, a compressed backup where the chosen server did not support compression
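
For the permissions point in particular, it can help to inspect who owns the backup object and who has been granted read access, since a cross-account copy can leave the bucket owner unable to read the file (see Solution 5). A minimal sketch, with the bucket and key names as placeholders:

# Shows the object's owner and its ACL grants; if the bucket-owning account
# is missing from the grants, the RDS role in that account cannot read the file.
aws s3api get-object-acl --bucket <bucket_name> --key <backup_file.bak>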

Solution 4

Use --acl bucket-owner-full-control when copying from another account.
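
For example, the flag can be passed on the copy itself. A minimal sketch, with the bucket names as placeholders:

# Copy the backup across accounts and give the destination bucket's owner
# full control over the new object in the same step.
aws s3 cp s3://source-bucket/MyDB.bak s3://destination-bucket/MyDB.bak --acl bucket-owner-full-control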

If you've already copied it, you must grant access after the object has been added to the bucket: the object owner can grant the bucket owner access with a put-object-acl command:

aws s3api put-object-acl --bucket destination_DOC-EXAMPLE-BUCKET --key keyname --acl bucket-owner-full-control

Source: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/

Solution 5

I had this problem when transferring database instances between accounts. I created an S3 bucket in the Destination account, and gave write permissions to the Source account.

When I backed up the Source database to the S3 bucket everything worked fine, but when I tried to restore that backup in the Destination database I got the error:

Error making request with Error Code Forbidden and Http Status Code Forbidden. No further error information was returned by the service.

After finding this thread and seeing the clues about permissions (including making backup files publicly readable!) I discovered that even though the backup had been written to a bucket owned by the Destination account, the backup file in there was owned by the Source account and that the Destination account didn't have read permissions for it! This was unexpected to me. To put it another way:

The owner of a bucket can give write permissions to another account. That account can then write files that are not readable by the owner of the bucket.

Looking back now it completely makes sense, though I hadn't expected it as 'default' behaviour!

I'm sure that I could have fixed the permissions with considered use of the AWS CLI, however in my case it was much simpler to:

  1. Create a bucket in the Source account
  2. Give read permissions to the Destination account (a bucket-policy sketch follows this list)
  3. Back up the Source database to that bucket, giving you a file and a bucket that are both owned by the Source account but readable by the Destination account
  4. Restore the backup into the Destination account's database
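
Step 2 can be done with a bucket policy on the Source account's bucket that lets the Destination account read the objects. A minimal sketch, assuming a hypothetical bucket named source-transfer-bucket and with the Destination account ID as a placeholder (the Destination instance's option group role still needs its own IAM permissions on this bucket, as in the policy quoted in the comments below):

# Attach a policy to the Source bucket so the Destination account can
# list the bucket and read the backup objects in it.
aws s3api put-bucket-policy --bucket source-transfer-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDestinationAccountToListBucket",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::<DESTINATION_ACCOUNT_ID>:root"},
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::source-transfer-bucket"
    },
    {
      "Sid": "AllowDestinationAccountToReadBackups",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::<DESTINATION_ACCOUNT_ID>:root"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::source-transfer-bucket/*"
    }
  ]
}'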

Comments

  • hakuna over 1 year

    I am able to successfully back up a database from a SQL Server instance on AWS RDS to an S3 bucket with the following command:

    -- Backup databases - MyDB
    exec msdb.dbo.rds_backup_database 
           @source_db_name='MyDB',
           @s3_arn_to_backup_to='arn:aws:s3:::mybucket/MyDB.bak',
           @overwrite_S3_backup_file=1;
    

    But restoring the same backup to a different SQL Server instance within the same region, using the following command, is not working:

    -- Restore databases - MyDB
    EXEC msdb.dbo.rds_restore_database 
            @restore_db_name='MyDB', 
            @s3_arn_to_restore_from='arn:aws:s3:::mybucket/MyDB.bak';
    

    I am getting the following error when I check the task status with exec msdb.dbo.rds_task_status @db_name='MyDB':

    [2017-05-19 19:22:22.127] Aborted the task because of a task failure or a concurrent RESTORE_DB request.
    [2017-05-19 19:22:22.150] Error making the request with Error Code Forbidden and Http Status Code Forbidden. No further error information was returned by the service.
    

    I have done this backup and restore on multiple DB instances before and never seen this kind of error. The S3 bucket and the SQL Server instance where the .bak file needs to be restored are in the same region, and the role assigned in the option group has all the required permissions on the S3 bucket as well. Below is the option group role's policy:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation"
                ],
                "Resource": [
                    "arn:aws:s3:::mybucket"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObjectMetaData",
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:ListMultipartUploadParts",
                    "s3:AbortMultipartUpload"
                ],
                "Resource": [
                    "arn:aws:s3:::mybucket/*"
                 ]
            }
        ]
    }
    
  • Aniket Betkikar almost 6 years
    All I had to do was make the backup db file public and it worked.
  • kemsky about 5 years
    Public access for a backup?? Sounds like a disaster.
  • hakuna about 5 years
    That was a quick fix; later I added the appropriate permissions to the bucket.
  • Gavin Mannion almost 5 years
    There must be a better fix. I did this now as well to get my backup, but it seems ludicrous to have to open it up to the public to write the backup and then remove the permissions.
  • jackofallcode about 3 years
    Please do not make your backup files public! What resolved it for us was that we were missing the ownership switch on the IAM role from the source account. See this link.