Run Grunt / Gulp inside Docker container or outside?
Solution 1
The only difference I see is that you can reproduce a full grunt installation with the second approach.
With the first one, you depend on a local action which might be done differently on different environments.
A container should be based on an image that can be reproduced easily, instead of depending on a host folder which contains "what is needed" (without knowing how that part was done).
If the build environment overhead which comes with the installation is too much for a grunt image, you can:

- create an image "app.tar" dedicated to the installation (I did that for Apache, which I had to recompile, creating a deb package in a shared volume). In your case, you can create an archive ('tar') of the installed app.
- create a container from a base image, using the volume from that first container:

docker run -it --name=app.inst --volumes-from=app.tar ubuntu tar xf /shared/path/app.tar
docker commit app.inst app

The end result is an image with the app present on its filesystem.
This is a mix between your approach 1 and 2.
Solution 2
I'd like to suggest a third approach, which I have used for a statically generated site: the separate build image.
In this approach, your main Dockerfile (the one in the project root) becomes a build and development image, basically doing everything in the second approach. However, you override the CMD at run time to tar up the built dist folder into a dist.tar or similar.
Then, you have another folder (something like image) that contains a Dockerfile. The role of that image is only to serve up the dist.tar contents: we docker cp the archive out of the build container into this folder, and the Dockerfile just installs our web server and has an ADD dist.tar /var/www.
The abstract is something like:

- Build the builder Docker image (which gets you a working environment without a webserver). At this point, the application is built. We could run the container in development with grunt serve or whatever the command is to start our built-in development server.
- Instead of running the server, we override the default command to tar up our dist folder, with something like tar -cf /dist.tar /myapp/dist.
- We now have a temporary container with a /dist.tar artifact. Copy it to your actual deployment Docker folder, which we called image, using docker cp <container_id_from_tar_run>:/dist.tar ./image/.
- Now we can build the small Docker image, without all our development dependencies, with docker build ./image.
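The steps above can be sketched as a short shell sequence. This needs a running Docker daemon, and the image name (myapp-builder), container name (builder-run), and tarball paths are illustrative assumptions, not taken from the answer:

```shell
# 1. Build the builder image from the project-root Dockerfile.
docker build -t myapp-builder .

# 2. Run it, overriding the default command to produce the artifact
#    instead of starting the development server.
docker run --name builder-run myapp-builder tar -cf /dist.tar /myapp/dist

# 3. Copy the artifact out of the (now stopped) container into the
#    deployment folder that holds the second Dockerfile.
docker cp builder-run:/dist.tar ./image/

# 4. Build the slim deployment image; its Dockerfile ADDs dist.tar.
docker build -t myapp ./image

# Clean up the temporary container.
docker rm builder-run
```

Using a named container (rather than the container id) just makes the docker cp step easier to script.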
I like this approach because it is still all Docker. All the commands in this approach are Docker commands and you can really slim down the actual image you end up deploying.
If you want to check out an image with this approach in action, check out https://github.com/gliderlabs/docker-alpine which uses a builder image (in the builder folder) to build tar.gz files that then get copied to their respective Dockerfile folder.
Solution 3
A variation of solution 1 is to have a "parent -> child" image setup that makes the build of the project really fast. I would have a Dockerfile like:
FROM node
RUN mkdir app
COPY dist/package.json app/package.json
WORKDIR app
RUN npm install
This will handle the installation of the node dependencies. Then have another Dockerfile that handles the application "installation", like:
FROM image-with-dependencies:v1
ENV NODE_ENV=prod
EXPOSE 9001
COPY dist .
ENTRYPOINT ["npm", "start"]
With this you can continue your development, and the "build" of the docker image is going to be faster than it would be if you had to "re-install" the node dependencies. If you add new node dependencies, just re-build the dependencies image.
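Concretely, the two builds could be driven like this (the tag image-with-dependencies:v1 comes from the answer; keeping the two Dockerfiles as Dockerfile.deps and Dockerfile.app, and the myapp tag, are assumptions about the file layout):

```shell
# Build the dependencies image from the first Dockerfile; re-run this
# only when package.json changes.
docker build -t image-with-dependencies:v1 -f Dockerfile.deps .

# Build the application image from the second Dockerfile; this one is
# fast because npm install is already baked into the parent image.
docker build -t myapp -f Dockerfile.app .
```

The speed-up comes from Docker's layer cache: day-to-day code changes only invalidate the cheap COPY dist layer, never the expensive npm install layer.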
I hope this helps someone.
Regards
santi
Updated on June 11, 2022

Comments
- santi almost 2 years:
I'm trying to identify a good practice for the build process of a nodejs app using grunt/gulp to be deployed inside a docker container.
I'm pretty happy with the following sequence:
- build using grunt (or gulp) outside container
- add ./dist folder to container
- run npm install (with --production flag) inside container
But in every example I find, I see a different approach:
- add ./src folder to container
- run npm install (with dev dependencies) inside container
- run bower install (if required) inside container
- run grunt (or gulp) inside container
IMO, the first approach generates a lighter and more efficient container, but all of the examples out there are using the second approach. Am I missing something?
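For reference, the first approach can be sketched as follows; everything here (paths, tags, the generated Dockerfile contents) is a placeholder illustration, not part of the question:

```shell
# Build outside the container (assumes grunt is installed locally).
grunt build

# A minimal runtime Dockerfile that only ships ./dist and the
# production dependencies:
cat > Dockerfile <<'EOF'
FROM node
COPY dist /app
WORKDIR /app
RUN npm install --production
CMD ["npm", "start"]
EOF

docker build -t myapp .
```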
- santi almost 9 years: I see your point, but that means that you must turn every container into a build environment, which finally gets deployed to production. It feels like too much overhead, doesn't it? What do you think about adding some kind of clean-up procedure after the build process? Would that be efficient docker-wise?
- VonC almost 9 years: @scarmuega I agree. I have edited my answer to address your concern.
- Scotty over 8 years: That gliderlabs example is so advanced, I can't pick out the relevant parts related to the above question. I like the concept, but the example is too complicated to grasp.
- Darryl about 8 years: Why do you create an app.tar and make that a requirement of your docker run, rather than building a deployable docker image that you can put into a docker repo? Creating a deployable docker image allows you to simply pull and run the image from wherever you need it, and it's stable. Your container will also start within a second or so with this approach.
- VonC about 8 years: @Darryl I build the deployable image in the next step, from a small base image. If I were to distribute the image in which I built the app directly, I would include too much overhead with it (too many files that are there only for building).
- Darryl about 8 years: What I see in a lot of Dockerfiles is the addition of cleanup tasks to remove whatever is unnecessary, so that the image stays light.
- VonC about 8 years: @Darryl I agree, but that cleaning process is what I want to avoid when building: gcc and other build tools install a lot of files, and I don't want any side-effects from files I might have missed and not removed. I prefer starting from a clean image and installing there.