Rails server is still running in a newly opened Docker container
Solution 1
You are using an onbuild image, so your working directory is mounted into the container. This is very good for development, since your app updates in real time as you edit your code, and changes propagate back to your host system, for example when you run a migration.
It also means that the pid file is written into your host system's tmp directory every time a server runs, and it will remain there if the server is not shut down cleanly.
Just run this command from your host system:
sudo rm tmp/pids/server.pid
This can be a real pain when you are, for example, running foreman under docker-compose, since just pressing Ctrl+C will not remove the pid file.
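A slightly safer variant (my own sketch, not part of the original answer) removes the file only when the recorded process is actually gone. Note that the PID was recorded inside the container, so on the host it may coincidentally match an unrelated process; treat the check as a heuristic. The function name `clean_stale_pid` is hypothetical:

```shell
#!/bin/sh
# clean_stale_pid: remove a pid file only when the recorded process
# is no longer alive (kill -0 probes for existence without signaling).
clean_stale_pid() {
  pid_file="${1:-tmp/pids/server.pid}"
  [ -f "$pid_file" ] || return 0
  pid=$(cat "$pid_file")
  if kill -0 "$pid" 2>/dev/null; then
    echo "server (pid $pid) still running; keeping $pid_file"
  else
    echo "removing stale $pid_file"
    rm -f "$pid_file"
  fi
}
```

Run it from the host app directory in place of the unconditional rm above.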
Solution 2
I was stumped by the same problem for a bit until I figured out what was really going on. The real answer is further down. At first, you may be tempted to try something like the following command in your docker-compose.yml file:
command: /bin/sh -c "rm -f /rails/tmp/pids/server.pid && rails server puma"
(I'm using Alpine and BusyBox to keep things tiny, so no bash, but the -c flag works with bash too.) This deletes the file if it exists, so you don't get stuck with a container that keeps exiting and sits there unable to run commands.
Unfortunately, that is NOT a good solution, because it adds an extra /bin/sh layer in front of the server, which prevents the server from receiving the stop signal. The result is that docker stop won't produce a graceful exit, and the stale pid file problem will keep happening.
You could also temporarily set the command to just the rm invocation, run docker-compose up once to remove the file, and then change the command back to the server and carry on.
However, the real answer is to create a simple docker-entrypoint.sh file and make sure you call it using the exec form (see the ENTRYPOINT documentation) so that signals (like stop) reach the server process:
#!/bin/sh
set -e
if [ -f tmp/pids/server.pid ]; then
  rm tmp/pids/server.pid
fi
exec bundle exec "$@"
NOTE: we use exec on the last line so that rails runs as PID 1 (no extra shell) and therefore receives the stop signals. Then, in the Dockerfile (or the compose file), add the entrypoint and command:
# Get stuff running
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["rails", "server", "puma"]
And again, you MUST use the [] (exec) form so that the command is exec'd rather than run via sh -c.
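The effect of exec can be demonstrated without Docker at all: an exec'd program replaces the shell in place and keeps its PID, which is exactly why the server ends up as PID 1 in the container and receives the SIGTERM from docker stop. A minimal sketch in plain sh (not from the original answer):

```shell
#!/bin/sh
# Print the wrapper shell's PID, then exec a child shell and print its
# PID: exec replaces the process image, so both PIDs come out equal.
out=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
first=$(printf '%s\n' "$out" | sed -n 1p)
second=$(printf '%s\n' "$out" | sed -n 2p)
echo "before exec: $first, after exec: $second"
if [ "$first" = "$second" ]; then
  echo "exec preserved the PID (no extra shell layer)"
fi
```

Without exec, the wrapper shell keeps its own PID and the server runs as a child, so docker stop signals the shell, not the server.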
Solution 3
This is an adapted version of Brendon Whateley's answer.
Docker Compose Solution
1. Create docker-entrypoint.sh
#!/bin/bash
set -e
if [ -f tmp/pids/server.pid ]; then
  rm tmp/pids/server.pid
fi
exec bundle exec "$@"
2. Adapt your docker-compose.yml
services:
  web:
    build: .
    entrypoint: /myapp/docker-entrypoint.sh
    command: ["rails", "server", "-b", "0.0.0.0"]
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
Notice that you have to use the path where your app is mounted inside the container, i.e. /myapp.
2.5 If you encounter a permission error
Run this in your terminal before running docker-compose up or building your image (thanks, sajadghawami):
chmod +x docker-entrypoint.sh
3. Enjoy never having to remove server.pid manually again
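One more alternative, assuming a reasonably recent Rails app: the generated config/puma.rb often reads the pid path from a PIDFILE environment variable (check your own puma.rb before relying on this). If so, you can point the pid file at a path outside the mounted volume, so it never lands on the host at all. A sketch, reusing the service and volume names from the answer above:

```yaml
# docker-compose.yml fragment (sketch). PIDFILE support depends on
# whether your generated config/puma.rb reads this variable.
services:
  web:
    build: .
    command: ["rails", "server", "-b", "0.0.0.0"]
    environment:
      PIDFILE: /tmp/server.pid   # lives only inside the container
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
```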
Solution 4
What worked for me:
docker-compose run web bash
and then navigate folder by folder with cd (I am using Docker Toolbox on Windows 7), so in the end I run in the bash session:
rm tmp/pids/server.pid
Solution 5
What I have done is open a bash shell in the Docker container:
docker-compose run web /bin/bash
then remove the following file
rm tmp/pids/server.pid
Hope that helps!
Comments
-
handkock almost 2 years
I want to deploy my Rails project using Docker, so I use Docker Compose. But I get one weird error message. When I run docker-compose up (this brings up a db container with PostgreSQL, a Redis container, and a web container with Rails) I get:
web_1 | => Booting Puma
web_1 | => Rails 4.2.4 application starting in production on http://0.0.0.0:3000
web_1 | => Run rails server -h for more startup options
web_1 | => Ctrl-C to shutdown server
web_1 | A server is already running. Check /usr/src/app/tmp/pids/server.pid.
web_1 | Exiting
I cannot understand why I get this message, because every time I run docker-compose up new containers start, not the previous ones. And even if I want to delete the server.pid, I am not able to, because the container isn't running. My docker-compose.yml file:
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "80:3000"
  links:
    - redis
    - db
  environment:
    - REDISTOGO_URL=redis://user@redis:6379/
redis:
  image: redis
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
Dockerfile-rails:
FROM rails:onbuild
ENV RAILS_ENV=production
I don't think I need to post all my Dockerfiles.
UPD: I fixed it myself: I just deleted all my containers and ran
docker-compose up
once again. -
handkock over 8 years
The problem is: there's no server.pid file in my tmp/pids directory. -
TopperH over 8 years
I think a workaround could be to run "docker-compose run web /bin/bash" and remove the pid file from there. Then try to investigate how it got there in the first place. Also, you should run "docker inspect" on your container, since it's probably not mounting your host directory correctly.
-
Joao Costa over 7 years
Great answer. I had to set my entrypoint to "/myapp/docker-entrypoint.sh" to get this to work. Related discussion at github.com/docker/compose/issues/1393 .
-
Ben Morris about 7 years
A variation of this worked for me: I added
rm /myapp/tmp/pids/server.pid
to my startup script before running foreman start. -
Dan Kohn almost 7 years
Fantastic answer, thank you. Note that I had to
chmod +x docker-entrypoint.sh
to resolve a permission denied error. -
Seafish over 6 years
Could you give an example of the entrypoint/command for docker-compose?
-
Natus Drew almost 6 years
This is the preferred solution.
-
Steven Aguilar almost 6 years
I had to go into the container with
docker-compose run web /bin/bash
and then remove the pid. -
TopperH almost 6 years
@StevenAguilar this actually means you are not using an onbuild image as the OP was. In your case it would have been better practice to just run
docker-compose rm web
since there is nothing persistent in that container. -
Steven Aguilar almost 6 years
@TopperH that didn't work; I still got the error. When I went back to remove the server.pid I got
Unable to find image 'web:latest' locally
since I had removed it. -
TopperH almost 6 years
@StevenAguilar are you running your rails server using docker-compose up? -
Steven Aguilar almost 6 years
@TopperH yes, I'm using docker-compose up to start the rails server. -
mirageglobe over 5 years
Actually, I would do "rm -f /rails/tmp/pids/server.pid; rails s" rather than &&, as && will return exit 1 if the file does not exist.
-
Brendon Whateley over 5 years
As I mentioned at the beginning, the solution using && is a hack. If that is what you want, you can replace the && with a ;. But a better solution is to modify the if statement in the suggested docker-entrypoint.sh. Consider, however, that either solution removes the stale PID file before starting, so the container will no longer refuse to start after an unclean shutdown. -
davegson over 5 years
As @dankohn suggested, run
chmod +x docker-entrypoint.sh
but do this locally, before building the image! I wasted an hour or more trying to run this command in my Dockerfile, which failed hard.
davegson over 5 years
@Seafish, I added an adapted answer to solve this issue when using docker-compose.
-
Brendon Whateley over 5 years
This looks like it is for the case where the PID file is outside the container, which wasn't the case in the original question. Keeping the PID file outside the container doesn't make sense, because a PID is only unique inside a container and can't reliably mean anything across containers.
-
Artur INTECH over 3 years
The Docker guide states the same: docs.docker.com/compose/rails.