No targets available when trying to set alias target from Route 53 to S3

Solution 1

The A-record alias you create has to have the same name as the bucket, because virtual hosting of buckets in S3 requires that the Host: header sent by the browser match the bucket name. There's not really another practical way in which virtual hosting of buckets could be accomplished... the bucket has to be identified by some mechanism, and that mechanism is the HTTP Host: header.

In order to create an alias to a bucket inside the "example.com" domain, the bucket name also has to be a hostname you can legally declare within that domain... the Route 53 A-record "testbucket.example.com," for example, can only be aliased to a bucket called "testbucket.example.com" ... and no other bucket.

In your question, you're breaking this constraint... you can only create an alias to a bucket named "simples3websitetest.com" inside of (and at the apex of) the "simples3websitetest.com" domain.

This is by design, and not exactly a limitation of Route 53 or of S3. They're only preventing you from doing something that can't possibly work. Web servers are unaware of any aliasing or CNAMEs or anything else done in the DNS -- they only receive the original hostname that the browser believes it is trying to connect to, in the HTTP headers sent by the browser ... and S3 uses this information to identify the name of the bucket to which the virtual-hosted request applies.
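To make the mechanism concrete, here is a minimal sketch (in Python; this is not S3's actual implementation, and the endpoint suffix is just an illustrative example) of how a virtual-hosting endpoint maps the incoming Host: header to a bucket name:

```python
# Sketch of S3-style virtual hosting: the bucket is identified purely from
# the Host: header the browser sent. The suffix below is illustrative.
WEBSITE_SUFFIX = ".s3-website-us-west-2.amazonaws.com"

def bucket_for_host(host: str) -> str:
    """Return the bucket a request would be routed to, given its Host: header."""
    if host.endswith(WEBSITE_SUFFIX):
        # Direct request to the website endpoint: <bucket>.s3-website-...
        return host[: -len(WEBSITE_SUFFIX)]
    # Request arrived via a DNS alias/CNAME: the server still sees only the
    # original hostname, so the bucket name must equal that hostname exactly.
    return host

print(bucket_for_host("testbucket.example.com"))  # -> testbucket.example.com
print(bucket_for_host("mysite.s3-website-us-west-2.amazonaws.com"))  # -> mysite
```

This is why no DNS-level trickery can point "testbucket.example.com" at a bucket with a different name: the server never sees the DNS record, only the hostname.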

Amazon S3 requires that you give your bucket the same name as your domain. This is so that Amazon S3 can properly resolve the host headers sent by web browsers when a user requests content from your website. Therefore, we recommend that you create your buckets for your website in Amazon S3 before you pay to register your domain name.

http://docs.aws.amazon.com/gettingstarted/latest/swh/getting-started-create-bucket.html#bucket-requirements

Note, however, that this restriction only applies when you are not using CloudFront in front of your bucket.

With CloudFront, there is more flexibility, because the Host: header can be rewritten (by CloudFront itself) before the request is passed through to S3. You configure the "origin host" in your CloudFront distribution as your-bucket.s3-website-xx-yyyy-n.amazonaws.com where xx-yyyy-n is the AWS region of S3 where your bucket was created. This endpoint is shown in the S3 console for each bucket.
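As a quick sanity check, the origin host string described above can be assembled like this (a sketch; the bucket and region names are placeholders, and note that a few older regions use a dot rather than a dash after "s3-website", so check the endpoint shown in your S3 console):

```python
def s3_website_endpoint(bucket: str, region: str) -> str:
    """Build the S3 static-website endpoint to use as a CloudFront origin.

    Uses the dashed form (s3-website-<region>) shown in the answer; some
    regions use s3-website.<region> instead -- verify in the S3 console.
    """
    return f"{bucket}.s3-website-{region}.amazonaws.com"

print(s3_website_endpoint("your-bucket", "us-west-2"))
# -> your-bucket.s3-website-us-west-2.amazonaws.com
```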

Solution 2

Assume you have a hosted zone abc.com. and you create a bucket abc.com (which doesn't show up in the list of Route 53 alias targets). You may suspect the trailing "." after the zone name is the problem, but you can't include it in a bucket name anyway.

Try this as well, because the first time I created the bucket with the correct name and it still didn't work. Believe me, I have OCD, so I didn't miss a full stop or a comma.

  1. Create another hosted zone with the same name abc.com
  2. You will now see 2 of the same hosted zone (abc.com. and abc.com.)
  3. Delete the new one
  4. Go back to the old hosted zone abc.com
  5. You should now see the S3 endpoints coming up - this may be a bug in Route 53

This worked for me after trying almost everything. Some suggestions say to log out and back in, for some sort of cache clearing - not sure.
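If the console keeps refusing to list the endpoint, the alias record can also be created directly through the Route 53 API, bypassing the dropdown entirely. A sketch of the change batch that `change-resource-record-sets` expects (the zone ID "Z3BJ6K6RIION7M" is the fixed alias hosted-zone ID Amazon publishes for us-west-2 S3 website endpoints -- check the current AWS docs for your region, as these IDs are assumptions here):

```python
def alias_change_batch(record_name: str, s3_website_zone_id: str,
                       s3_website_dns: str) -> dict:
    """Build a Route 53 change batch for an A-record alias to an S3 website.

    The resulting dict can be passed as --change-batch to
    `aws route53 change-resource-record-sets`, or as ChangeBatch to the
    boto3 route53 client's change_resource_record_sets call.
    """
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "AliasTarget": {
                    # S3's own alias hosted-zone ID for the region -- NOT
                    # your hosted zone's ID.
                    "HostedZoneId": s3_website_zone_id,
                    "DNSName": s3_website_dns,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

batch = alias_change_batch(
    "abc.com.",
    "Z3BJ6K6RIION7M",                      # us-west-2 (verify in AWS docs)
    "s3-website-us-west-2.amazonaws.com",
)
```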



Author: Amir Zucker

Updated on September 18, 2022

Comments

  • Amir Zucker
    Amir Zucker almost 2 years

    I'm trying to setup a simple Amazon AWS S3 based website, as explained here.

    I've setup the S3 bucket (simples3websitetest.com), gave it the (hopefully) right permissions:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AddPerm",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "*"
                },
                "Action": [
                    "s3:GetObject"
                ],
                "Resource": [
                    "arn:aws:s3:::simples3websitetest.com/*"
                ]
            }
        ]
    }
    

    I uploaded index.html, setup website access, and it is accessible via http://simples3websitetest.com.s3-website-us-west-2.amazonaws.com/index.html

    So far so good, now I want to setup Amazon Route53 access and this is where I got stuck.

    I've setup a hosted zone on a domain I own (resourcesbox.net), and clicked "create record set", and got to the "setup alias" step, but I get "No targets available" under S3 website endpoints when I try to set the alias target.

    What did I miss??

    • Alberto Spelta
      Alberto Spelta about 10 years
      Starting from October 2012 Amazon introduced a function to handle redirects (HTTP 301) for S3 buckets. You can read my previous response here. stackoverflow.com/a/24218895/1160780
  • Amir Zucker
    Amir Zucker over 10 years
    This was indeed the problem, I created a bucket called resourcesbox.net and it did show up. Thank you! Quick follow up question: What this means is that if I want to have different buckets for that domain, I must have subdomains to suit each bucket right? There's no way around it?
  • Michael - sqlbot
    Michael - sqlbot over 10 years
    I'm not exactly sure what you mean by "I must have subdomains." You need to create an A record in Route 53 with a hostname matching each bucket that you want to use to host a web site in S3, yes.
  • Michael - sqlbot
    Michael - sqlbot over 9 years
    @oberstet this question is about Route 53 alias records pointed to S3 buckets with web site hosting enabled, which causes the DNS to resolve to the web site endpoint, not the REST endpoint. The web site endpoints don't support SSL at all; only the REST endpoints do. Also, all wildcard certs only support a maximum of one * and it can appear only in the leftmost hostname component, so that isn't really an S3 limitation.
  • oberstet
    oberstet over 9 years
    @Michael-sqlbot Right. RFC6125 6.4.3.2 disallows a single, left-most * to match periods (e.g., *.example.com would match foo.example.com but not bar.foo.example.com), but where does the RFC say that a wildcard cert for *.*.example.com (presumably then matching foo.example.com and bar.foo.example.com) is disallowed? Probably I've overlooked it, could you point me to? In any case, this is causing trouble: github.com/boto/boto/issues/2836
  • Michael - sqlbot
    Michael - sqlbot over 9 years
    @oberstet 6.4.3.1 The client SHOULD NOT attempt to match a presented identifier in which the wildcard character comprises a label other than the left-most label. So, there's no such thing as a multi-tiered wildcard. At any rate, your boto issue is a matter of the "calling format" option being apparently implemented incorrectly. Every bucket can be accessed over https with the bucket name as the first path element under the S3 URL for the bucket's correct region, e.g. https://s3-us-west-2.amazonaws.com/my-bucket.with-dots.in-us-west-2/key. Wrong regional endpoint = redirect error.
  • jwadsack
    jwadsack over 9 years
    I ran into this same issue. I had to sign out and back in to get the list to populate.
  • James Griffin
    James Griffin about 9 years
    I ran into this issue with the correct bucket name and everything set up correctly; signing out and back in did not populate the S3 target menu. The 'fix' is simple: just enter the S3 regional URL on its own, in this case "s3-website-us-west-2.amazonaws.com".
  • Martin Lyne
    Martin Lyne over 8 years
    Seems AWS doesn't really help you with this; going by the S3 settings and Route 53 settings it looks like you can just enable web hosting on the bucket and point the record wherever, so thanks for this answer. Shame that people can easily take other people's domain names for their buckets, too.
  • Michael - sqlbot
    Michael - sqlbot over 8 years
    @MartinLyne thanks. I've added a reference to the S3 documentation, about the bucket name and domain name needing to be the same, as well as mentioning the workaround for an already-taken bucket name, using CloudFront. In the us-east-1 and us-west-2 regions, and possibly others, the cost of using CloudFront is negligible and can potentially even save a little, since CF downloads are $0.005/GB cheaper on bandwidth than S3 direct at some edge locations.
  • Martin Lyne
    Martin Lyne over 8 years
    @Michael-sqlbot oh, interesting, thanks!
  • Zolbayar
    Zolbayar over 4 years
    Thanks! Just logging out and logging in did the trick for me!