How to integrate Capistrano with Docker for deployment?


Solution 1

As far as I understand, you are using Capistrano on the host to redeploy the whole application stack, i.e. the containers. So you are using Capistrano to orchestrate building, container creation and thus deployment.

When running cap deploy, you basically:

  • build the app (based on the current code base you pulled on the host), which probably even includes gulp/grunt build tasks
  • then "package" it into your image using volume mounts
  • during that, start / replace the containers

You do so to get a 'nearly' zero downtime deployment.

If you really care about the downtime and about formalising your deployment process that much, you should do it right by using a proper pipeline implementation for

  • packaging / ci
  • deployment / distribution

I do not think Capistrano can or should be one of the tools you use in this strategy. Capistrano is meant for deploying an application directly onto a server, using SSH and Git as transport. Using cap to build whole images on the target server and then start those as containers is really over the top, IMHO.

packaging / building

Use a CI/CD server like Jenkins/Bamboo/GoCD to build a release image for your application. Assuming only the app is customised in terms of a 'release': let's say you have db and app as containers/services; app includes your source code and will change regularly with each release.

So it is a CI/CD process to build a new app image (release) off-site on your CI server: pull the source code of your application and package it into your image using COPY, then use RUN statements to compile your assets (npm / gulp / grunt, whatever). All of that happens not on the production server but on the CI/CD agent. Using multi-stage builds to keep the images slim is encouraged, as sketched below.
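
For illustration, a multi-stage Dockerfile for such a release image could look roughly like this (the base images, paths and npm commands below are assumptions about a typical asset build, not taken from the answer):

    # --- build stage: install dependencies and compile the assets on the CI agent ---
    FROM node:18 AS assets
    WORKDIR /src
    COPY package*.json ./
    RUN npm ci
    COPY . .
    RUN npm run build            # gulp / grunt / webpack, whatever your project uses

    # --- final stage: slim runtime image that only ships the build output ---
    FROM nginx:alpine
    COPY --from=assets /src/dist /usr/share/nginx/html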

Then you push this release image, let's call it yourregistry.com/yourapp, into your private registry as a new 'version' for deployment.
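
As a sketch, the corresponding CI steps might look like this (the tag 1.0.1 is a placeholder; the registry name is the example one from above):

    # build the release image on the CI agent, tag it with a version,
    # then push it to the private registry so target servers can pull it
    docker build -t yourregistry.com/yourapp:1.0.1 .
    docker push yourregistry.com/yourapp:1.0.1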

deployment

with downtime (easy)

To deploy onto your production or staging server WITH downtime, you simply run docker-compose pull && docker-compose up. This pulls the newer image and then starts it in your stack; your app is upgraded. Using tagged images in the release stage would require changing the docker-compose.yml for each release.

The server must, of course, be able to pull from your private registry.
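
For illustration, the compose file on the server would then reference that image; the service names, tag and port mapping below are assumptions:

    # excerpt of docker-compose.yml on the target server
    version: "2"
    services:
      app:
        image: yourregistry.com/yourapp:1.0.1   # bump this tag for every release
        ports:
          - "80:80"
      db:
        image: mongo:3.2

After bumping the tag, docker-compose pull && docker-compose up -d recreates only the services whose image changed.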

without downtime (more effort)

To achieve a zero-downtime deployment, you should use the blue-green deployment concept. You add a proxy to your setup and no longer expose the app's port publicly, but instead expose the proxy's public port. Your current live system might be running on a random port, say 21231; the proxy forwards from 443 to 21231.

Random ports are used to avoid conflicts while deploying the "second" system, which covers one of the issues you mentioned.

When redeploying, you only start a "new" container based on the new app image in addition to the old one; it gets a new random port, say 12312. If you like, run your integration tests against 12312 directly (do not use the proxy). When you are done and happy, reconfigure the proxy to forward to 12312, then remove the old container (21231).
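
A rough sketch of that flow on the host (the image tag, container names and the /health endpoint are purely illustrative; the fixed port 12312 stands in for whatever random port you end up with):

    # start the "green" container next to the running "blue" one; it gets its
    # own host port, so it does not conflict with the live system on 21231
    docker run -d --name yourapp_green -p 12312:80 yourregistry.com/yourapp:1.0.1

    # run smoke / integration tests against the new container directly
    curl -fsS http://localhost:12312/health

    # once happy: point the proxy (nginx/haproxy) at 12312 and reload it,
    # then retire the old container that was serving 21231
    docker stop yourapp_blue && docker rm yourapp_blue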

If you want to automate the proxy reconfiguration, which in detail is out of scope for this question, you can use service discovery and a registrator, which makes random ports much more practical and makes it easy to reconfigure your proxy, be it nginx or HAProxy, while it is running. Tools would be, for example…

Solution 2

I don't think Capistrano is the right tool for the job. This was recently discussed in a PR for SSHKit, which underlies Capistrano.

https://github.com/capistrano/sshkit/pull/368

@EugenMayer does a better job of explaining a "normal" way of using Docker.



Comments

  • Michaël Perrin over 1 year

    I am not sure my question is relevant, as I may be trying to mix tools (Capistrano and Docker) that should not be mixed.

    I have recently dockerized an application that is deployed with Capistrano. Docker Compose is used for both the development and staging environments.

    This is what my project looks like (the application files are not shown):

    Capfile
    docker-compose.yml
    docker-compose.staging.yml
    config/
        deploy.rb
        deploy
            staging.rb
    

    The Docker Compose files create all the necessary containers (Nginx, PHP, MongoDB, Elasticsearch, etc.) to run the app in the development or staging environment (hence some specific parameters being defined in docker-compose.staging.yml).
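
    For illustration, docker-compose.staging.yml only overrides the staging-specific bits of the base file; a hypothetical excerpt (the values are made up, not from the actual project):

    # docker-compose.staging.yml, merged on top of docker-compose.yml
    version: "2"
    services:
      php:
        environment:
          APP_ENV: staging
      nginx:
        ports:
          - "8080:80"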

    The app is deployed to the staging environment with this command:

    cap staging deploy
    

    The folder structure on the server is the usual Capistrano one:

    current
    releases
        20160912150720
        20160912151003
        20160912153905
    shared
    

    The following command has been run in the current directory of the staging server to instantiate all the necessary containers to run the app:

    docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
    

    So far so good. Things get more complicated on the next deploy: the current symlink will point to a new directory inside the releases directory:

    • If deploy.rb defines commands that need to be executed inside containers (like docker-compose exec php composer install for PHP), Docker reports that the containers don't exist yet (because the existing ones were created in the previous release folder).
    • If a docker-compose up -d command is executed in the Capistrano deployment process, I get some errors because of port conflicts (the previous containers still exist).

    Do you have an idea on how to solve this issue? Should I move away from Capistrano and do something different?

    The idea would be to keep the (near) zero-downtime deployment that Capistrano offers with the flexibility of Docker containers (providing several PHP versions for various apps on the same server for instance).

  • Michaël Perrin over 7 years
    Thank you for your answer! I should probably ditch Capistrano then for the deployment from the host to the server. I already have the app as a container, so it should be alright to build an image for it. As it wouldn't use a volume share (useful for the dev env), would I need a specific Dockerfile to add the COPY / RUN statements? I guess I will have to make sure the upload dirs in the app are still mounted volumes, so that they are not reset when deploying a new app image. Is there any good tutorial for setting up this whole deployment process? (the easy one)
  • Michaël Perrin over 7 years
    Thanks for your answer, it confirms I should not use Capistrano anymore in this dockerized architecture. I didn't find any good tutorial explaining the whole process, from installing a private registry to preparing a (sample) app for deployment as an image, but I will try to get more information on this soon.
  • Eugen Mayer over 7 years
    Exactly, as mentioned above: add COPY to deploy the code into your image at build time.
  • Eugen Mayer over 7 years
    Get the book DevOps 2.0, it covers all those topics; definitely worth a read.
  • Kevin Monk about 7 years
    Victor Farcic, who wrote DevOps 2.0, has a video: youtube.com/watch?v=QhPEOhKXm-s