How to correctly link php-fpm and Nginx Docker containers?
Solution 1
Don't hardcode the IP of containers in your nginx config: docker link adds the hostname of the linked container to the hosts file of the linking container, so you should be able to ping it by hostname.
EDIT: Docker 1.9 networking no longer requires you to link containers; when multiple containers are connected to the same network, their hosts files are updated so they can reach each other by hostname.
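As a sketch of the post-1.9 approach (the service names web and app are placeholders, not from the question), a version 2 compose file puts all services on a shared default network, so one container can reach the other simply by its service name:

```yaml
version: '2'
services:
    web:
        image: nginx
        # can reach the other service by name, e.g. fastcgi_pass app:9000;
    app:
        image: php:fpm
```

No links section is needed; compose creates the default network and registers each service name as a hostname on it.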
Every time a docker container spins up from an image (even stop/start-ing an existing container), the container gets a new IP assigned by the docker host. These IPs are not in the same subnet as your actual machines.
See the docker linking docs (this is what compose uses in the background), but it is more clearly explained in the docker-compose docs on links & expose.
links
links:
    - db
    - db:database
    - redis
An entry with the alias's name will be created in /etc/hosts inside containers for this service, e.g.:
172.17.2.186 db
172.17.2.186 database
172.17.2.187 redis
expose
Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified.
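For example, a database service that should only be reachable by linked containers (the db service and image here are illustrative, not from the question) could be declared like this:

```yaml
db:
    image: postgres
    expose:
        - "5432"   # reachable by linked services only, not published on the host
```

Compare this with ports: - "5432:5432", which would additionally publish the port on the host machine.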
And if you set up your project to get the ports and other credentials through environment variables, links automatically set a number of system variables. To see which environment variables are available to a service, run docker-compose run SERVICE env.
name_PORT
Full URL, e.g. DB_PORT=tcp://172.17.0.5:5432
name_PORT_num_protocol
Full URL, e.g.
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
name_PORT_num_protocol_ADDR
Container's IP address, e.g.
DB_PORT_5432_TCP_ADDR=172.17.0.5
name_PORT_num_protocol_PORT
Exposed port number, e.g.
DB_PORT_5432_TCP_PORT=5432
name_PORT_num_protocol_PROTO
Protocol (tcp or udp), e.g.
DB_PORT_5432_TCP_PROTO=tcp
name_NAME
Fully qualified container name, e.g.
DB_1_NAME=/myapp_web_1/myapp_db_1
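As an illustration of how such a variable might be consumed in a startup script (the value below is hard-coded for the example; in a real linked container it would already be present in the environment):

```shell
# Hypothetical example: split a link-style env var of the form tcp://HOST:PORT.
DB_PORT_5432_TCP="tcp://172.17.0.5:5432"

hostport="${DB_PORT_5432_TCP#tcp://}"   # strip the protocol prefix
DB_HOST="${hostport%%:*}"               # everything before the colon
DB_PORT="${hostport##*:}"               # everything after the colon

echo "$DB_HOST $DB_PORT"
```

This prints the host and port separately, ready to pass to whatever client needs them.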
Solution 2
I know it is kind of an old post, but I had the same problem and couldn't understand why the code didn't work. After a LOT of tests I found out why.
It seems like fpm receives the full path from nginx and tries to find the files in the fpm container, so it must be exactly the same as the root defined in the nginx config, even if that path doesn't exist in the nginx container.
To demonstrate:
docker-compose.yml
nginx:
    build: .
    ports:
        - "80:80"
    links:
        - fpm
fpm:
    image: php:fpm
    ports:
        - ":9000"
    # seems like fpm receives the full path from nginx
    # and tries to find the files in this container, so it must
    # be the same as nginx's root
    volumes:
        - ./:/complex/path/to/files/
/etc/nginx/conf.d/default.conf
server {
    listen 80;

    # this path MUST be exactly the same as docker-compose.fpm.volumes,
    # even if it doesn't exist in this container.
    root /complex/path/to/files;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        fastcgi_pass fpm:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
Dockerfile
FROM nginx:latest
COPY ./default.conf /etc/nginx/conf.d/
Solution 3
As pointed out before, the problem was that the files were not visible to the fpm container. However, to share data among containers the recommended pattern is to use data-only containers (as explained in this article).
Long story short: create a container that just holds your data, share it with a volume, and link this volume into your apps with volumes_from.
Using compose (1.6.2 on my machine), the docker-compose.yml file would read:
version: "2"
services:
    nginx:
        build:
            context: .
            dockerfile: nginx/Dockerfile
        ports:
            - "80:80"
        links:
            - fpm
        volumes_from:
            - data
    fpm:
        image: php:fpm
        volumes_from:
            - data
    data:
        build:
            context: .
            dockerfile: data/Dockerfile
        volumes:
            - /var/www/html
Note that data publishes a volume that is linked to the nginx and fpm services. Then the Dockerfile for the data service, which contains your source code:
FROM busybox
# content
ADD path/to/source /var/www/html
And the Dockerfile for nginx, which just replaces the default config:
FROM nginx
# config
ADD config/default.conf /etc/nginx/conf.d
For the sake of completeness, here's the config file required for the example to work:
server {
    listen 0.0.0.0:80;
    root /var/www/html;

    location / {
        index index.php index.html;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
    }
}
which just tells nginx to use the shared volume as document root, and sets the right config for nginx to be able to communicate with the fpm container (i.e. the right HOST:PORT, which is fpm:9000 thanks to the hostnames defined by compose, and the SCRIPT_FILENAME).
Solution 4
New Answer
Docker Compose has been updated. They now have a version 2 file format.
Version 2 files are supported by Compose 1.6.0+ and require a Docker Engine of version 1.10.0+.
They now support the networking feature of Docker which, when compose is run, sets up a default network (e.g. myapp_default).
From their documentation your file would look something like the below:
version: '2'
services:
    web:
        build: .
        ports:
            - "8000:8000"
    fpm:
        image: php:fpm
    nginx:
        image: nginx
As these containers are automatically added to the default myapp_default network, they are able to talk to each other. You would then have in the Nginx config:
fastcgi_pass fpm:9000;
Also, as mentioned by @treeface in the comments, remember to ensure PHP-FPM is listening on port 9000. This can be done by editing /etc/php5/fpm/pool.d/www.conf, where you will need listen = 9000.
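The relevant pool setting would look something like this (the file path varies by PHP version; /etc/php5/fpm/pool.d/www.conf is for PHP 5, newer installs use e.g. /etc/php/7.x/fpm/pool.d/www.conf):

```ini
[www]
; listen on TCP port 9000 instead of a unix socket,
; so other containers on the network can reach it
listen = 9000
```

The official php:fpm image already listens on 9000 by default, so this mainly matters for custom images or distro packages configured to use a unix socket.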
Old Answer
I have kept the below here for those using older versions of Docker/Docker Compose who would like the information.
I kept stumbling upon this question on Google when trying to find an answer, but it was not quite what I was looking for, due to the Q/A emphasis on docker-compose (which at the time of writing only had experimental support for Docker networking features). So here is my take on what I have learnt.
Docker has recently deprecated its link feature in favour of its networks feature
Therefore, using the Docker networks feature, you can link containers by following these steps. For a full explanation of the options, read the docs linked previously.
First, create your network:
docker network create --driver bridge mynetwork
Next, run your PHP-FPM container, making sure you open up port 9000 and assign it to your new network (mynetwork):
docker run -d -p 9000 --net mynetwork --name php-fpm php:fpm
The important bit here is --name php-fpm at the end of the command, which sets the container's name; we will need it later.
Next, run your Nginx container, again assigning it to the network you created:
docker run --net mynetwork --name nginx -d -p 80:80 nginx:latest
For the PHP and Nginx containers you can also add --volumes-from flags etc. as required.
Now comes the Nginx configuration, which should look something like this:
server {
    listen 80;
    server_name localhost;

    root /path/to/my/webroot;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Notice the fastcgi_pass php-fpm:9000; in the location block. That tells nginx to contact the container php-fpm on port 9000. When you add containers to a Docker bridge network, they all automatically get a hosts-file update that maps each container name to its IP address. So when Nginx sees that directive, it will know to contact the PHP-FPM container you named php-fpm earlier and assigned to your mynetwork Docker network.
You can add that Nginx config either during the build process of your Docker image or afterwards; it's up to you.
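For example (the file name default.conf is a placeholder for your own config), baking the config in at build time would look like:

```dockerfile
# hypothetical build-time approach: copy your site config into the image
FROM nginx:latest
COPY default.conf /etc/nginx/conf.d/default.conf
```

Alternatively, you can bind-mount the config when starting the container instead, e.g. docker run --net mynetwork --name nginx -d -p 80:80 -v "$PWD/default.conf:/etc/nginx/conf.d/default.conf:ro" nginx:latest, which avoids rebuilding the image on every config change.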
Solution 5
As previous answers have shown, but it should be stated very explicitly: the PHP code needs to live in the php-fpm container, while the static files need to live in the nginx container. For simplicity, most people just attach all the code to both, as I have also done below. In the future, I will likely separate out these different parts of the code in my own projects, to minimise which containers have access to which parts.
I have updated my example files below with this latest revelation (thank you @alkaline).
This seems to be the minimum setup for compose file format 2.0 onward (because things got a lot easier with version 2).
docker-compose.yml:
version: '2'
services:
    php:
        container_name: test-php
        image: php:fpm
        volumes:
            - ./code:/var/www/html/site
    nginx:
        container_name: test-nginx
        image: nginx:latest
        volumes:
            - ./code:/var/www/html/site
            - ./site.conf:/etc/nginx/conf.d/site.conf:ro
        ports:
            - 80:80
(UPDATED the docker-compose.yml above: for sites that have css, javascript, static files, etc., you will need those files accessible to the nginx container, while still having all the php code accessible to the fpm container. Again, because my base code is a messy mix of css, js, and php, this example just attaches all the code to both containers.)
In the same folder:
site.conf:
server {
    listen 80;
    server_name site.local.[YOUR URL].com;

    root /var/www/html/site;
    index index.php;

    location / {
        try_files $uri =404;
    }

    location ~ \.php$ {
        fastcgi_pass test-php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
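If you want nginx to serve static assets directly rather than passing everything through the try_files rule, a location rule along these lines (the extension list is illustrative) could be added inside the server block above:

```nginx
# serve static assets straight from the shared volume, bypassing php-fpm
location ~* \.(css|js|png|jpe?g|gif|svg|ico)$ {
    expires 7d;
    try_files $uri =404;
}
```

This only works because the compose file mounts ./code into both containers, so the nginx container can see the static files.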
In folder code:
./code/index.php:
<?php
phpinfo();
and don't forget to update your hosts file:
127.0.0.1 site.local.[YOUR URL].com
and run docker-compose up:
$ docker-compose up -d
and try the URL in your favorite browser:
site.local.[YOUR URL].com/index.php
Comments
-
Victor Bocharsky almost 4 years
I am trying to link 2 separate containers:
The problem is that PHP scripts do not work. Perhaps the php-fpm configuration is incorrect. Here is the source code, which is in my repository. Here is the docker-compose.yml file:
nginx:
    build: .
    ports:
        - "80:80"
        - "443:443"
    volumes:
        - ./:/var/www/test/
    links:
        - fpm
fpm:
    image: php:fpm
    ports:
        - "9000:9000"
and the Dockerfile which I used to build a custom image based on the nginx one:
FROM nginx
# Change Nginx config here...
RUN rm /etc/nginx/conf.d/default.conf
ADD ./default.conf /etc/nginx/conf.d/
Lastly, here is my custom Nginx virtual host config:
server {
    listen 80;
    server_name localhost;

    root /var/www/test;

    error_log /var/log/nginx/localhost.error.log;
    access_log /var/log/nginx/localhost.access.log;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        fastcgi_pass 192.168.59.103:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
Could anybody help me configure these containers correctly to execute php scripts?
P.S. I run the containers via docker-compose like this:
docker-compose up
from the project root directory.
-
Vincent De Smet about 9 years: you also don't need to publish port 9000 on the host; ports are open between linked docker containers, unless you want to troubleshoot the port directly from your host.
-
Victor Bocharsky about 9 years: Yes, you're right, thanks. In my case I should use fastcgi_pass fpm:9000 instead of a direct IP. I didn't know that Docker adds it to the hosts file automatically, my bad.
-
Victor Bocharsky about 9 years: What about the port, then: is it better to use expose instead of ports? Or could I omit both the ports and expose directives, because linked containers will have access to the port anyway?
-
Vincent De Smet about 9 years: sorry for the late reply - I think you may need to use expose; sorry, I can't check right now
-
Vincent De Smet about 9 years: here is the full reference of docker-compose links - docs.docker.com/compose/yml/#links. I added this info to my answer
-
Victor Bocharsky almost 9 years: Yes, you are right! We must share the files with both fpm and nginx
-
Victor Bocharsky almost 9 years
-
Alfred Huang over 8 years: WELL DONE!!! That's exactly the point! I had set the nginx root to an alternative path other than /var/www/html, without success.
-
shriek almost 8 years: Also, just a note that :9000 is the port being used inside the container, not the one exposed to your host. Took me 2 hours to figure this out. Hopefully, you won't have to.
-
Bernard almost 8 years: Your nginx config file assumes that your website has only php files. It's a best practice to create an nginx location rule for static files (jpg, txt, svg, ...) that avoids the php interpreter. In that case both the nginx and php containers need access to the website files. @iKanor's answer above takes care of that.
-
Phillip almost 8 years: Thanks @Alkaline, static files are a problem with my original answer. In fact, nginx really needs, at a minimum, the css and js files to be local to that machine in order to work properly.
-
treeface almost 8 years: Also remember to make sure php-fpm is listening on port 9000. This would be listen = 9000 in /etc/php5/fpm/pool.d/www.conf.
-
DavidT almost 8 years: Thanks @treeface, good point. I have updated with your comment.
-
Aftab Naveed over 7 years: Looks like the data does not get updated from the host to the containers, and when I do docker ps -a I see the data container stopped; is that an issue?
-
iKanor over 7 years: That's the expected behaviour. A data-only container does not run any command, and it will just be listed as stopped. Also, the Dockerfile of the data container copies your sources into the container at build time; that is why they will not be updated when you change the files on the host. If you want to share the sources between the host and the container, you need to mount the directory: change the data service in the compose file to load image: busybox, and in the volumes section enter ./sources:/var/www/html, where ./sources is the path to your sources on the host.
-
030 about 7 years
services.fpm.ports is invalid: Invalid port ":9000", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
-
Seer almost 7 years: You don't actually need to include a ports section at all here. You may just need to expose it if it's not already in the image (which it probably is). If you're doing inter-container communication, you shouldn't be exposing the PHP-FPM port.
-
cptPH almost 6 years: Was looking for a solution for AH01071: Got error 'Primary script unknown\n', and the fact that the php-fpm container has to share the same directory with the web nodes was the solution!
-
therobyouknow almost 6 years: --links are now obsolete, according to the docker documentation that you reference. They are still currently supported, but the apparent plan is for them to be removed.
-
therobyouknow almost 6 years: also, as per the EDIT in the answer. And the docker expose statement can be used to connect to a specific port on another container (when used with the hostname), as the answerer advises.
-
Daniele Cruciani over 4 years: @cptPH nothing needs to be shared, actually. I do not know which image is reporting that error, but the fpm image must know about /complex/path/to/files, and that must be exactly the path where the php files lie, including index.php, which is the fallback defined in default.conf. BUT, if you are serving anything other than php, then that path must exist in nginx too. (Common, but I am using this for a pure REST service, so I do not need it.)
-
Simon Davies about 4 years: thanks for this, old but still informative; I had to use /usr/share/nginx/html 👍, thanks
-
Ulrich Eckhardt about 4 years: There is one thing that should be unnecessary: creating your own image for Nginx. Instead, create a single configuration file that you mount into the container as its configuration file. Instead of the include for the single site, put that inline. Multiple sites from one webserver are not necessary in a container environment.
-
simon about 2 years: another option could be using fastcgi_param SCRIPT_FILENAME /var/www/html/$fastcgi_script_name;, provided that /var/www/html/ is the path inside the php-fpm container.