How to make a symlinked folder appear as a normal folder
Solution 1
I don't have much experience with Docker, so I can't promise this will work, but one option would be to mount the directory instead of linking to it:
$ cd projects/app1
$ mkdir shared
$ sudo mount -o bind ../shared shared/
That will attach ../shared to ./shared and should be completely transparent to the system. As explained in man mount:
The bind mounts.
Since Linux 2.4.0 it is possible to remount part of the file hierarchy somewhere else. The call is:
mount --bind olddir newdir
or by using this fstab entry:
/olddir /newdir none bind
After this call the same contents are accessible in two places.
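A self-contained way to see that transparency (throwaway paths under mktemp, not the question's real tree): a symlink reports itself as a link, while a bind mount is indistinguishable from a plain directory, which is why Docker's ADD/COPY is happy with it. The mount step needs root, so it is guarded here:

```shell
# Compare a symlink with a bind mount; only the latter looks like a real directory.
workdir=$(mktemp -d)
mkdir -p "$workdir/shared" "$workdir/app1/mnt"
ln -s ../shared "$workdir/app1/link"

[ -L "$workdir/app1/link" ] && echo "link is a symlink"
[ -L "$workdir/app1/mnt" ] || echo "mnt is a plain directory"

# mount --bind needs root; skipped otherwise so the sketch stays runnable
if [ "$(id -u)" -eq 0 ]; then
  mount --bind "$workdir/shared" "$workdir/app1/mnt"
  [ -L "$workdir/app1/mnt" ] || echo "still a plain directory after the bind"
  umount "$workdir/app1/mnt"
fi
```

Remember the bind mount is not persistent; it has to be redone after a reboot unless you add the fstab entry above.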
Solution 2
This issue has come up repeatedly in the Docker community. It basically violates the requirement that a Dockerfile
be repeatable if you run it or I run it. So I wouldn't expect this ability, as described in this ticket: Dockerfile ADD command does not follow symlinks on host #1676.
So you have to conceive of a different approach. If you look at this issue: ADD to support symlinks in the argument #6094, a friend of ours from U&L (@Patrick aka. phemmer) provides a clever workaround.
$ tar -czh . | docker build -
This tells tar to dereference the symbolic links from the current directory, and then pipe them all to the docker build - command.
-c, --create
create a new archive
-h, --dereference
follow symlinks; archive and dump the files they point to
-z, --gzip, --gunzip, --ungzip
filter the archive through gzip
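To make the effect of -h concrete, here is a throwaway sketch (temporary paths, not the question's real tree): without -h the archive only contains the symlink entry itself, while the dereferenced archive contains the actual file behind the link, which is what docker build needs to receive on stdin.

```shell
# Throwaway layout: app1/shared is a symlink to ../shared
workdir=$(mktemp -d)
mkdir -p "$workdir/shared" "$workdir/app1"
echo 'source code' > "$workdir/shared/lib.dart"
ln -s ../shared "$workdir/app1/shared"

# Without -h the archive stores only the symlink entry ...
plain=$(cd "$workdir/app1" && tar -czf - . | tar -tzf -)
# ... with -h it stores a real directory plus the file behind the link.
deref=$(cd "$workdir/app1" && tar -czhf - . | tar -tzf -)

echo "$plain"
echo "$deref"
```

Only the second listing contains lib.dart, so only the dereferenced stream gives docker build a usable context.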
Related videos on Youtube
![zoechi](https://i.stack.imgur.com/IlnZm.jpg?s=256&g=1)
zoechi
Google+ Profil: https://twitter.com/gzoechi I'm an enthusiastic and experienced software engineer sortware architect developer consultant looking for clients/projects I focus on custom line of business applications with Dart (mobile, browser, server) I have also a lot of experience with other technologies Angular TypeScript CSS .NET/C# , Java, Dynamics AX ERP databases OSX, Linux, Windows ...
Updated on September 18, 2022
Comments
-
zoechi almost 2 years
I have two Dart applications I need to dockerize. These two apps use a shared source directory.
Because Docker prevents adding files from folders outside the context directory (projects/app1) I can't add files from ../shared nor from shared (the symlink inside projects/app1). I'm looking for a way to trick Docker into doing it anyway.
My simplified project structure:
- projects
  - app1
    - Dockerfile
    - shared (symlink ../shared)
    - otherSource
  - app2
    - Dockerfile
    - shared (symlink ../shared)
    - otherSource
  - shared
    - source
I could move the Dockerfile one level up and run docker build from there, but then I would need two Dockerfiles (one for app1 and one for app2) in the same directory.
My current idea was that if I could somehow hide the fact that projects/app1/shared is a symlink, this problem would be solved. I checked whether I could share projects using Samba, remount it somewhere else, and configure Samba to treat symlinks like normal folders, but haven't found out whether this is supported (I don't have much experience with Samba and haven't tried it yet, just searched a bit).
Is there any other tool or trick that would allow that? I would rather not change the directory structure, because that would cause other trouble, and I'd also rather not copy files around.
-
terdon over 9 years: @zoechi this is perfectly on topic on both sites. As a general rule, I would post more technical questions like this on U&L and more user-space questions here. The choice is completely up to you though. On the one hand, there are more users here so more eyeballs; on the other, there is a much higher concentration of professional *nix people on U&L. Just make sure you don't post the same question on both sites. If you want to move it, either delete this or flag for mod attention and ask them to migrate.
-
Bruno Bronosky over 9 years: This is an EXCELLENT solution! I understand why Docker claims they want to omit this feature. However, there is a considerable difference between the workflow I use while developing my containerized project and how I expect it to be built for production. On my local machine I want a super tight feedback loop. My app has 1 git repo and the build environment for the containers has a 2nd repo. I need to be able to make edits and run builds and tests locally before I can decide if I want to commit and push. I won't have symlinks or ADD instructions in my final project.
-
dim over 8 years: I had to restart the Docker daemon! Otherwise the mounted dir was not visible in the container.
-
csch over 7 years: @dim yes! I tried to get it to work with Capistrano and it didn't work – turns out I mounted the shared directories after I started the container.
-
Jason over 7 years: Dockerfiles are not repeatable. Dockerfiles can not possibly be made repeatable, because they almost all have apt-get or something equivalent at the 2nd or 3rd layer and apt-get is not repeatable. Tying the Docker development strategy to a misguided attempt to make the impossible true will just saddle Docker with a series of bad abstractions that help nobody. nathanleclaire.com/blog/2014/09/29/…
-
Jason over 7 years: Unfortunately this won't work for Windows or OS X users. The debate about this issue has been... lively.
-
Alexander Mills almost 7 years: For the newbs, can you please explain in English what is going on here, linking to the tar man page is nice but
-
Alexander Mills almost 7 years: Ok, so I don't really see why this is any better than a cp command, can you explain why it's better? I also think the pipe is confusing/overly convoluted. Why not just put the tar command above the build command? I guess because then you would overwrite the symlinked dir with the real dir.
-
slm almost 7 years: @AlexanderMills - you don't want to copy the links in, you need the actual files they're linking to, hence the way I showed. Think about this bit: where are the links going to point to inside a Docker container that doesn't have the actual files the links are pointing to?
-
Alexander Mills almost 7 years: no I get that part - to repeat myself, (a) I don't see why this is better than a cp command, and (b) I think I already answered myself - you need the pipe, otherwise you would overwrite the symlinked dir with the actual dir data. In any case, I think a better solution than this is to either use mount (to mount the parent dir to a local dir) or to copy a temp Dockerfile to the parent dir, and then delete the temp Dockerfile when you're done.
-
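Since this comment thread, there is a less hacky variant of the temp-Dockerfile idea: docker build -f (available since Docker 1.5) takes the Dockerfile path separately from the build context, so each app can keep its own Dockerfile while projects/ is passed as the context. A sketch recreating the question's layout in a scratch directory; the docker call itself is guarded because it needs a running daemon:

```shell
# Recreate the question's layout in a scratch directory
workdir=$(mktemp -d)
mkdir -p "$workdir/projects/app1" "$workdir/projects/shared"
printf 'FROM scratch\nCOPY shared/ /shared/\n' > "$workdir/projects/app1/Dockerfile"

cd "$workdir/projects"
# With projects/ as the context, COPY shared/ works without any symlink;
# paths inside the Dockerfile are then relative to projects/.
if command -v docker >/dev/null 2>&1; then
  docker build -f app1/Dockerfile -t app1 . || true
fi
```

The same pattern builds app2 with -f app2/Dockerfile, so no Dockerfiles have to share a directory.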
slm almost 7 years: @AlexanderMills - the best way to see what happens is to try it and see the difference. Also, the above is a building of a container, not a running one, so there's no mounting. stackoverflow.com/questions/37328370/…. I highly suggest you try all these things out, it'll make much more sense.
-
user5359531 over 6 years: I just added /bin/cp ../requirements.txt . && docker build ... to a Makefile for building the Docker image, it was easier.
-
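If you do go the cp route, the main objection (leftover copies polluting the repo) is easy to address with a shell trap, so the copy disappears even when the build fails. A sketch with made-up file names in a scratch directory; the docker call is guarded because it needs a daemon:

```shell
# Scratch layout: the file to share lives one level above the build context
workdir=$(mktemp -d)
mkdir -p "$workdir/parent/app"
echo 'flask' > "$workdir/parent/requirements.txt"

cd "$workdir/parent/app"
(
  cp ../requirements.txt .
  trap 'rm -f ./requirements.txt' EXIT   # cleanup runs even if the build fails
  if command -v docker >/dev/null 2>&1; then
    docker build -t my-image . || true
  fi
)
# Back outside the subshell, the copy is gone again
ls "$workdir/parent/app"
```

The trap fires when the subshell exits, so the working tree is left exactly as it was found.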
Adam Pietrasiak over 5 years: is mounting 'committed' to source control, e.g. GitHub? or would I have to do it every time?
-
terdon over 5 years: @pie6k how could it be committed? Source control tracks changes in text files, not commands run on the system.
-
Jonas D. about 4 years: To work with everything rush (and possibly other monorepo handlers) does, you might also have to include `--hard-dereference`, as it otherwise doesn't include half of the references (up until seeing that option in the tar man page I thought hardlinks were essentially just "files"... apparently not entirely).
-
Karl Forner about 4 years: This is SO useful!!
-
cglacet over 3 years: @AlexanderMills the problem with copying files is that you'll have to remove them afterwards (because these are probably garbage files that have nothing to do in your current repo). In other words, avoiding side effects is very important.
-
Venryx almost 3 years: Can someone, anyone, link to a full example that uses this? I tried running tar -czh . | docker build - in place of my regular docker build -t my-image-name ., but I get the error (on Windows): tar: Failed to clean up compressor failed to get console mode for stdin: The handle is invalid. [...] failed to solve with frontend dockerfile.v0: failed to create LLB definition: the Dockerfile cannot be empty