Why should you do port mapping for docker containers?


Solution 1

TL;DR

There are a number of use cases for port mapping, but for DevOps at scale the primary reason is generally to enable mapping well-known service ports to available ports on the host. This matters when you're running large numbers of containers that use the same port by default, and you don't want to manually assign or track alternative port numbers.

A Very Short Port Primer

As a general rule, a port can map to only a single service or process on each host (multiplexed ports and multi-port services are something of an exception). Only 65,536 ports are available for services to bind, with the lowest 1,024 generally reserved for binding by the root user. Services also typically bind to well-known ports such as 22, 53, or 5432 to make them easy to find. All of these issues matter, but on Docker hosts the last one is usually the biggest concern.
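To make the single-binding rule concrete, here is a small Python sketch (the variable names are just for illustration) showing that a second plain bind to a port that is already in use fails with EADDRINUSE — the same constraint that keeps two services from sharing one host port:

```python
import errno
import socket

# Bind one listener to an ephemeral port on localhost.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))
a.listen()
port = a.getsockname()[1]

# A second plain bind to the same address and port fails with
# EADDRINUSE, which is why each host port maps to one service.
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))
    conflict = False
except OSError as e:
    conflict = (e.errno == errno.EADDRINUSE)
finally:
    b.close()
    a.close()

print(conflict)  # True
```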

Mapping Container Ports

Imagine that you have multiple PostgreSQL containers on a single host. Each wants to bind to port 5432 by default. While you could certainly modify each container or run command to bind the container's service to a unique host port, this quickly becomes a hassle at scale.

Instead, Docker and other container managers make it easy to map ports between the host OS and the container. For example:

# launch three PostgreSQL instances
for i in {1..3}; do
    docker run --rm -d -P postgres:alpine
done

# show port mappings for each container
docker container ls -q --filter="ancestor=postgres:alpine" |
    xargs -n1 docker port
5432/tcp -> 0.0.0.0:32773
5432/tcp -> 0.0.0.0:32772
5432/tcp -> 0.0.0.0:32771

This shows that you have three instances of PostgreSQL, all happily listening on the default port of 5432 inside their containers. However, each instance is listening on a different port (32771, 32772, and 32773) on the Docker host!

At scale, you would typically use DNS, autodiscovery, linking, or container networking to help clients and applications find the right PostgreSQL instance to connect to. With just a few instances running, parsing the output of docker ps may be sufficient for your needs. Your specific use case may vary.
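If you do end up parsing this output by hand, the mapping lines are easy to pick apart. Here is a small hypothetical Python helper (the `parse_port_lines` name is my own) that turns lines in the `docker port` format shown above into (container port, host port) pairs:

```python
def parse_port_lines(lines):
    """Parse lines like '5432/tcp -> 0.0.0.0:32773' into
    (container_port, host_port) tuples."""
    mappings = []
    for line in lines:
        left, right = line.split(" -> ")
        container_port = int(left.split("/")[0])
        host_port = int(right.rsplit(":", 1)[1])
        mappings.append((container_port, host_port))
    return mappings

sample = [
    "5432/tcp -> 0.0.0.0:32773",
    "5432/tcp -> 0.0.0.0:32772",
]
print(parse_port_lines(sample))  # [(5432, 32773), (5432, 32772)]
```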

Solution 2

When you instantiate a new container, it gets its own network stack, isolated from the host, so you cannot reach the container's ports directly from the host.

ports:
  - 8080:80

This declaration forwards a host port to a container port: connecting to port 8080 on your local host reaches port 80 inside the container.
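To build some intuition for what that mapping does, here is a toy Python sketch of a single-shot TCP forwarder, loosely mimicking the idea behind Docker's userland proxy (the names and ephemeral ports are illustrative, not Docker internals):

```python
import socket
import threading

def echo_server(sock):
    # Stand-in for the service inside the "container": echo one message.
    conn, _ = sock.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

def forward(listen_sock, target_port):
    # Accept one client on the "host" port and relay bytes to the
    # "container" port, roughly what a port mapping does for you.
    client, _ = listen_sock.accept()
    upstream = socket.create_connection(("127.0.0.1", target_port))
    upstream.sendall(client.recv(1024))
    client.sendall(upstream.recv(1024))
    upstream.close()
    client.close()

# "Container" service listening on an arbitrary port.
backend = socket.socket()
backend.bind(("127.0.0.1", 0))
backend.listen()
backend_port = backend.getsockname()[1]

# "Host" port mapped onto it.
front = socket.socket()
front.bind(("127.0.0.1", 0))
front.listen()
front_port = front.getsockname()[1]

threading.Thread(target=echo_server, args=(backend,)).start()
threading.Thread(target=forward, args=(front, backend_port)).start()

# A client only ever talks to the "host" port.
with socket.create_connection(("127.0.0.1", front_port)) as c:
    c.sendall(b"ping")
    reply = c.recv(1024)

print(reply)  # b'ping'
```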

There are other ways to do it as well. For example, you can run Traefik as a reverse proxy in front of the containers: attach all the containers to a shared network, expose only Traefik's port, and you can then reach each container through the router rules you have specified.
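As a rough sketch of that idea (not Traefik's actual API), routing many containers behind one exposed port boils down to a lookup from a request attribute, such as the Host header, to a backend; the hostnames and ports below are made up:

```python
# Hypothetical routing table, in the spirit of a Traefik router rule:
# one public port, many backends chosen by the request's Host header.
routes = {
    "db1.example.test": 32771,
    "db2.example.test": 32772,
}

def pick_backend(host_header):
    """Return the backend port for a Host header, or None if unrouted."""
    return routes.get(host_header)

print(pick_backend("db2.example.test"))  # 32772
```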

The simple answer is:

Because it's the easiest way to access a service running inside a container.


Author: Kwstas

Updated on September 18, 2022

Comments

  • Kwstas, over 1 year ago

    I have been using Docker for a few months, and I am just a developer, not a DevOps or networking person. However, I came across a docker-compose file that maps an external port to an internal port, something like this compose file where it says:

    ports:
      - 8080:80
    

    My question is WHY? Why do we need to map an external port to an internal port? Some of the explanations I came across said it's for when you want to avoid exposing a particular port to the user but still use it internally. That begs the same question: what would be a real-life example of someone wanting to do this, and why?