How to expose a Docker network to the host machine?
Solution 1
You can accomplish this by running a DNS proxy (such as dnsmasq) in a container on the same network as the application. Then point your host's DNS at the container's IP, and you will be able to resolve hostnames as if you were inside a container on that network.
https://github.com/hiroshi/docker-dns-proxy is one example of this.
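A minimal sketch of this approach, assuming a user-defined network named `mynet` (the network name and the `andyshinn/dnsmasq` image are illustrative; inside a user-defined network, dnsmasq can forward lookups to Docker's embedded DNS):

```shell
# Run dnsmasq on the same user-defined network, publishing DNS on the host.
docker run -d --name dns-proxy --network mynet -p 53:53/udp \
    andyshinn/dnsmasq:latest

# Point the host's resolver at the proxy, e.g. in /etc/resolv.conf:
#   nameserver 127.0.0.1
# Container names on mynet should then resolve from the host.
```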
Solution 2
If you need a quick workaround to access a container:
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
172.19.0.9
If you need to use the container name, add it to your /etc/hosts:

# /etc/hosts
172.19.0.9 container_name
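The two steps above can be combined into one command; a sketch, where `container_name` stands for whatever name you gave the container:

```shell
# Look up the container's IP on its network, then append a hosts entry.
ip=$(docker inspect -f \
  '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name)
echo "$ip container_name" | sudo tee -a /etc/hosts
```

Note that container IPs are not stable across restarts, so the hosts entry may need refreshing.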
Solution 3
I am not sure I understand you correctly. You want, for example, your redis server to be accessible not only from containers on the same network, but also from outside the containers via your host's IP address?
To accomplish that, use the expose option as described here: https://docs.docker.com/compose/compose-file/#/expose
expose:
- "6379"
So
ports:
- "6379:6379"
expose:
- "6379"
should do the trick.
The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number.
from https://docs.docker.com/engine/reference/builder/#expose
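To illustrate the last sentence of that quote, a sketch using plain `docker run` (the host port 16379 is an arbitrary choice for the example):

```shell
# EXPOSE alone only documents the port; -p actually publishes it on the host.
# Here the container's 6379 is published under a different host port, 16379.
docker run -d --name redis-example -p 16379:6379 redis

# -P instead publishes every EXPOSEd port on random high-numbered host ports.
docker run -d --name redis-auto -P redis
```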
polvoazul
Updated on June 25, 2020

Comments
polvoazul, almost 4 years ago: Consider the following docker-compose.yml:

version: '2'
services:
  serv1:
    build: .
    ports:
      - "8080:8080"
    links:
      - serv2
  serv2:
    image: redis
    ports:
      - "6379:6379"

I am forwarding the ports to the host in order to manage my services, but the services can access each other simply using the default Docker network. For example, a program running on serv1 could access redis:6379 and some DNS magic will make that work. I would like to add my host to this network so that I can access each container's ports by hostname:port.
Nehal J Wani, over 7 years ago
BMitch, over 7 years ago: At present, Docker will tell you to keep using port binding. There's nothing to expose the internal DNS or linking to the docker host that I've seen. And the default firewall rules would block you from direct access from outside the host and force you to go through this port binding. Nehal's comment shows resolving docker hosts, and artworkad shows the port binding that you already do.
polvoazul, over 7 years ago: Not exactly, I would like to be able to access ANY port I like in the containers, without binding to the host's ports. For instance, if there is a web server running on 8080 on serv1, I would like to point my host's browser at serv1:8080 instead of localhost:8080. This way I would not need to edit the docker-compose file every time I need access to a new port.
polvoazul, over 7 years ago: Note that this already happens between containers. In my serv1 container I can access the redis daemon via serv2:6379. No need to expose ports in advance or anything; they are on the same network, it just works. I would like this behaviour on the host.
DarkLeafyGreen, over 7 years ago: @polvoazul I understand. But I think you cannot mix the internal service discovery over DNS with exposing services to the host (which is done via port binding). Also, I doubt that it is good practice to expose all ports. So to access services from the host, I do not think there is really a good way around port bindings (at least I am not aware of one).
Jaroslav Záruba, over 4 years ago: From docker.com: [expose] "Expose ports without publishing them to the host machine..."