How do you put up a maintenance page for AWS when your instances are behind an ELB?


Solution 1

The simplest way on AWS is to use Route 53, their DNS service.

You can use its Weighted Round Robin (WRR) feature.

"You can use WRR to bring servers into production, perform A/B testing, or balance your traffic across regions or data centers of varying sizes."

More information is available in the AWS documentation on this feature.
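A hedged sketch of the WRR idea with the AWS CLI, using hypothetical zone IDs, domain names, and targets: two weighted records share the same name, one pointing at the ELB and one at a maintenance host. Setting the ELB's weight to 0 sends all new lookups to the maintenance record; restoring the weights ends maintenance (subject to DNS TTLs).

```shell
# Hypothetical zone ID, record name, and targets -- adjust to your setup.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [
      {
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "www.example.com",
          "Type": "CNAME",
          "SetIdentifier": "app-elb",
          "Weight": 0,
          "TTL": 60,
          "ResourceRecords": [{"Value": "my-elb-1234.us-east-1.elb.amazonaws.com"}]
        }
      },
      {
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "www.example.com",
          "Type": "CNAME",
          "SetIdentifier": "maintenance",
          "Weight": 100,
          "TTL": 60,
          "ResourceRecords": [{"Value": "maintenance.example.com"}]
        }
      }
    ]
  }'
```

Keep the TTL low (e.g. 60 seconds) so the switchover propagates quickly; clients that cache DNS aggressively may still see the old target for a while.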

EDIT: Route 53 recently added a new feature that allows DNS Failover to S3. Check their documentation for more details: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html

Solution 2

I realise this is an old question but after facing the same problem today (December 2018), it looks like there is another way to solve this problem.

Earlier this year, AWS introduced support for redirects and fixed responses to Application Load Balancers. In a nutshell:

  • Locate your ELB in the console.
  • View the rules for the appropriate listener.
  • Add a fixed 503 response rule for your application's host name.
  • Optionally provide a text/plain or text/html response (e.g. your maintenance page HTML).
  • Save changes.

Once the rule propagates to the ELB (took ~30 seconds for me), when you try to visit your host in your browser, you'll be shown the 503 maintenance page.

When your deployment completes, simply remove the rule you added.
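The console steps above can also be scripted. A hedged sketch with a hypothetical listener ARN and host name:

```shell
# Add a high-priority rule that returns a fixed 503 with a minimal maintenance page.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc123/def456 \
  --priority 1 \
  --conditions '[{"Field":"host-header","HostHeaderConfig":{"Values":["www.example.com"]}}]' \
  --actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"503","ContentType":"text/html","MessageBody":"<h1>Down for maintenance</h1>"}}]'
```

The `create-rule` output includes the new rule's ARN; when the deployment completes, remove the rule with `aws elbv2 delete-rule --rule-arn <rule-arn>`. Note the fixed-response body is limited to 1024 bytes, so keep the inline HTML small.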

Solution 3

Came up with another solution that's working great for us. Here are the required steps to get a simple 503 http response:

  1. Replicate your EB environment to create another one; call it something like app-environment-maintenance.
  2. Change the autoscaling configuration and set the minimum and maximum servers both to zero. This won't cost you any EC2 servers, and the environment will turn grey and sit in your list.
  3. Finally, you can use the AWS CLI to swap the environment CNAMEs and take your main environment into maintenance mode. For instance:

    aws elasticbeanstalk swap-environment-cnames \
        --profile "$awsProfile" \
        --region "$awsRegion" \
        --output text \
        --source-environment-name app-prod \
        --destination-environment-name app-prod-maintenance
    

This would swap your app-prod environment into maintenance mode. It causes the ELB to throw a 503 since there aren't any running EC2 instances, and CloudFront can then catch the 503 and return your custom 503 error page, should you wish, as described below.
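Step 2 can also be scripted. A hedged sketch, assuming the environment name from the swap example above (Elastic Beanstalk exposes the Auto Scaling limits under the `aws:autoscaling:asg` option namespace):

```shell
# Pin the maintenance environment's Auto Scaling group to zero instances
# so it costs nothing while idle. Environment name is hypothetical.
aws elasticbeanstalk update-environment \
  --environment-name app-prod-maintenance \
  --option-settings \
    Namespace=aws:autoscaling:asg,OptionName=MinSize,Value=0 \
    Namespace=aws:autoscaling:asg,OptionName=MaxSize,Value=0
```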


Bonus configuration for custom error pages using CloudFront:

We use CloudFront, as many people will for HTTPS, etc., and CloudFront supports custom error pages. CloudFront is a requirement for this part.

  1. Create a new S3 website-hosting bucket with your error pages. Consider creating separate files for each response code, 503, etc. See the steps below for directory requirements and routes.
  2. Add the S3 bucket to your CloudFront distribution.
  3. Add a new behavior to your CloudFront distribution for a route like /error/*.
  4. Set up an error page in CloudFront to handle 503 response codes and point it at your S3 bucket route, like /error/503-error.html

Now, when your ELB throws a 503, your custom error page will be displayed.

And that's it. I know there are quite a few steps to get the custom error pages, and I tried a lot of the suggested options out there, including Route 53. But all of those have issues with how they work with ELBs, CloudFront, etc.

Note that after you swap the hostnames for the environments, it takes about a minute or so to propagate.

Solution 4

Route53 is not a good solution for this problem. It takes a significant amount of time for DNS entries to expire before the maintenance page shows up (and then it takes that same amount of time before they update after maintenance is complete). I realize that Lambda and CodeDeploy triggers did not exist at the time this question was asked, but I wanted to let others know that Lambda can be used to create a relatively clean solution for this, which I have detailed in a blog post: http://blog.ajhodges.com/2016/04/aws-lambda-setting-temporary.html

The gist of the solution is to subscribe a Lambda function to CodeDeploy events; during deployments it replaces your ASG in the load balancer with a micro instance serving a static page.
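A hedged sketch of that idea (not the blog post's exact code), assuming a classic ELB, a pre-built maintenance instance, and CodeDeploy notifications delivered via SNS; the load balancer name and instance ID are hypothetical, and the exact status strings in the SNS payload should be verified against your notification setup:

```python
import json
import boto3

elb = boto3.client("elb")  # classic ELB API

MAINTENANCE_INSTANCE = {"InstanceId": "i-0123456789abcdef0"}  # hypothetical
LOAD_BALANCER = "app-prod-elb"  # hypothetical

def handler(event, context):
    # CodeDeploy publishes deployment lifecycle notifications through SNS;
    # the deployment status is carried in the JSON message body.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    status = message.get("status")

    if status == "CREATED":
        # Deployment starting: put the static maintenance instance in rotation.
        elb.register_instances_with_load_balancer(
            LoadBalancerName=LOAD_BALANCER,
            Instances=[MAINTENANCE_INSTANCE],
        )
    elif status in ("SUCCEEDED", "FAILED", "STOPPED"):
        # Deployment finished: take the maintenance instance back out.
        elb.deregister_instances_from_load_balancer(
            LoadBalancerName=LOAD_BALANCER,
            Instances=[MAINTENANCE_INSTANCE],
        )
```

Because the ASG's own instances fail health checks mid-deployment, the maintenance instance ends up serving all traffic until the new instances pass health checks again.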

Solution 5

As far as I could see, we were in a situation where the above answers didn't apply or weren't ideal.

We have a Rails application running Puma with Ruby 2.3 on 64bit Amazon Linux/2.9.0, a platform that seems to come with a (classic) ELB.

So ALB 503 handling wasn't an option.

We also have a variety of hardware clients that I wouldn't trust to always respect DNS TTLs, so Route 53 is risky.

What did seem to work nicely is a secondary port on the nginx that comes with the platform.

I added this as .ebextensions/maintenance.config

files:
  "/etc/nginx/conf.d/maintenance.conf":
    content: |
      server {
        listen 81;
        server_name _ localhost;
        root /var/app/current/public/maintenance;
      }

container_commands:
  restart_nginx:
    command: service nginx restart

And dropped a copy of https://gist.github.com/pitch-gist/2999707 into public/maintenance/index.html

Now, to set maintenance, I just switch my ELB listeners to point to port 81 instead of the default 80. No extra instances, S3 buckets, or waiting for clients to refresh DNS.
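The listener switch can be sketched with the AWS CLI; the load balancer name below is hypothetical (with Elastic Beanstalk, the ELB is managed for you, so check the name it generated). Classic ELB listeners can't be edited in place, so the port-80 listener is deleted and recreated with the instance port pointed at nginx's maintenance port:

```shell
# Enter maintenance: route port 80 traffic to the nginx maintenance server on 81.
aws elb delete-load-balancer-listeners \
  --load-balancer-name app-prod-elb \
  --load-balancer-ports 80

aws elb create-load-balancer-listeners \
  --load-balancer-name app-prod-elb \
  --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=81

# To exit maintenance, repeat with InstancePort=80.
```

Make sure the instance security group allows ingress on port 81 from the ELB, or the listener will have nothing healthy to route to.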

It only takes ~15s or so for Beanstalk to apply the change (probably mostly waiting for CloudFormation in the back end).


Author: BestPractices

Updated on June 06, 2020

Comments

  • BestPractices
    BestPractices about 4 years

    How do you put up a maintenance page in AWS when you want to deploy new versions of your application behind an ELB? We want to have the ELB route traffic to the maintenance instance while the new auto-scaled instances are coming up, and only "flip over" to the new instances once they're fully up. We use auto-scaling to bring existing instances down and new instances, which have the new code, up.

    The scenario we're trying to avoid is having the ELB serve traffic to the new EC2 instances while also serving up the maintenance page. Since we don't have sticky sessions enabled, we want to prevent the user from being flipped back and forth between the maintenance-mode page and the application deployed on an EC2 instance. We also can't just scale up (say from 2 to 4 instances and then back to 2) to introduce the new instances because the code changes might involve database changes which would be breaking changes for the old code.

  • BestPractices
    BestPractices over 11 years
    It's not clear how this would work -- can you please expand on your answer? (Note, we do have Route 53 set up in our stack; it's just unclear how we'd use weighted round robin to only serve some of the EC2 instances at a time)
  • BestPractices
    BestPractices over 11 years
    i.e. our EC2 instances are behind a single ELB
  • Froyke
    Froyke over 11 years
    Guy probably means that you'll have 2 resources in the DNS recordset: one for the ELB and 2nd for the maintenance page. Once in 'maintenance mode' you'll increase the weight of the maintenance page service to a huge value. This means that the other route (your ELB) will get no connections. Once maintenance is over - you reset the weight of the page to zero.
  • BestPractices
    BestPractices over 11 years
    Ah, I should have figured that out on my own. Thanks for clarifying @Froyke!
  • BestPractices
    BestPractices about 8 years
    It does depend if you are using the Alias record in Route 53. Once a health failure occurs, the client will see no delay to swapping over to the other resource (if it is set up with a Route 53 Alias record)
  • Alex
    Alex about 8 years
    This assumes you're using CodeDeploy events.
  • BestPractices
    BestPractices over 5 years
    OP here... thanks for adding this, Tom. Once ALBs and this feature came out, we moved to something very similar (basically this) as our solution.
  • BestPractices
    BestPractices over 5 years
    To anyone reading just this answer, please read the other answers as well since AWS has added many more capabilities since this was posted, including an entirely different option that works well (basically, ALBs can also help achieve this)
  • Brent Bradburn
    Brent Bradburn about 5 years
    A great approach for Elastic Beanstalk! And if you don't need to be fancy, just let the EB load balancer provide your "503 Service Temporarily Unavailable" page directly (by scaling to zero). In this case, you only need steps 1, 2, and 8 (and 8 can alternatively be done in the EB GUI). Another variation would be to deploy a simple custom maintenance-page server on the maintenance environment.
  • Jacob Thomason
    Jacob Thomason about 5 years
    @nobar, you're right. If you don't want anything fancy, like a custom error page to display to users, steps 1, 2, and 8 are all that's needed. I'll update the post so that's made more clear.
  • nmott
    nmott almost 5 years
    How did you deal with the health check? My ELB goes out of service due to health check failure. I manually changed the health check to TCP:81 but it didn't work.
  • Mat Schaffer
    Mat Schaffer almost 5 years
    In my case the app was still up for healthchecks so it wasn't a problem. Your approach should work. If the ELB can't connect to the secondary nginx port I'd check (1) security group rules (can elb egress to 81, does the instance sg allow 81 ingress), (2) is nginx definitely listening (will require restart)
  • mawaldne
    mawaldne over 4 years
    Also using this strategy. It's simple and works great.