access host's ssh tunnel from docker container


Solution 1

I think you can do it by adding --net=host to your docker run command. But see also this question: Forward host port to docker container

Solution 2

Using your host's network as the network for your containers via --net=host (or network_mode: host in docker-compose) is one option, but it has two unwanted side effects: (a) you now expose the container's ports on your host system, and (b) you can no longer connect to containers that are not mapped to your host network.

In your case, a quicker and cleaner solution would be to make your ssh tunnel "available" to your docker containers (e.g. by binding ssh to the docker0 bridge) instead of exposing your docker containers on your host network (as suggested in the accepted answer).

Setting up the tunnel:

For this to work, first retrieve the IP your docker0 bridge is using:

ifconfig

you will see something like this:

docker0   Link encap:Ethernet  HWaddr 03:41:4a:26:b7:31  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0

Now you need to tell ssh to bind to this IP and listen for traffic directed at port 9000:

ssh -L 172.17.0.1:9000:host-ip:9999 [user@]remote-host

Without setting the bind_address, :9000 would only be available to your host's loopback interface and not per se to your docker containers.

Side note: You could also bind your tunnel to 0.0.0.0, which will make ssh listen on all interfaces.
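If you script this, the docker0 address can be discovered instead of hard-coded. A minimal sketch, assuming a Linux host with the `ip` tool; `user@remote-host` and the ports are placeholders, and the final ssh line is left commented out so the script stays inert:

```shell
#!/usr/bin/env bash
# Discover the docker0 bridge IP (falls back to Docker's default, 172.17.0.1).
get_docker0_ip() {
  ip -4 addr show docker0 2>/dev/null | awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}'
}

DOCKER0_IP=$(get_docker0_ip)
DOCKER0_IP=${DOCKER0_IP:-172.17.0.1}
echo "binding tunnel to ${DOCKER0_IP}:9000"

# Placeholder remote host; uncomment to actually open the tunnel:
# ssh -N -L "${DOCKER0_IP}:9000:host-ip:9999" user@remote-host
```

The awk step simply strips the `/16` prefix length from the `inet 172.17.0.1/16` line, so the script keeps working even if the bridge is configured with a non-default subnet.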

Setting up your application:

In your containerized application use the same docker0 ip to connect to the server: 172.17.0.1:9000. Now traffic being routed through your docker0 bridge will also reach your ssh tunnel :)

For example, if you have a .NET Core application that needs to reach the remote db through the tunnel at :9000, your ConnectionString would contain "server=172.17.0.1,9000;".
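To quickly verify from inside a container that the tunnel endpoint is reachable, a plain bash check using /dev/tcp works without installing anything. The address 172.17.0.1:9000 matches the example above and is an assumption about your setup:

```shell
# Returns 0 (and prints "open") if a TCP connection to host:port succeeds within 2s.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
    return 1
  fi
}

check_port 172.17.0.1 9000 || true  # example target from above; may be closed on your machine
```

Unlike curl, this also works for non-HTTP services such as databases, since it only tests the TCP handshake.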

Forwarding multiple connections:

When dealing with multiple outgoing connections (e.g. a docker container needs to connect to multiple remote DBs via tunnel), several valid techniques exist, but an easy and straightforward way is to simply create multiple tunnels listening for traffic arriving at different docker0 bridge ports.

Within your ssh tunnel command (ssh -L [bind_address:]port:host:hostport [user@]hostname), the port part of the bind_address does not have to match the hostport of the host and can therefore be freely chosen by you. So within your docker containers, just channel the traffic to different ports of your docker0 bridge, and then create several ssh tunnel commands (one for each port you are listening to) that intercept data at these ports and forward it to the different hosts and hostports of your choice.
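For instance, a two-tunnel setup might look like this (the DB hosts, ports, and jump host are placeholders, not values from the question):

```shell
# Two tunnels, one per remote DB, each listening on its own docker0 port:
ssh -N -L 172.17.0.1:9001:db1.internal:5432 user@jump-host &
ssh -N -L 172.17.0.1:9002:db2.internal:3306 user@jump-host &

# Containers then connect to 172.17.0.1:9001 (Postgres) and 172.17.0.1:9002 (MySQL).
```

-N keeps each session open for forwarding only, without running a remote command.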

Solution 3

On macOS (tested with Docker v19.03.2),

1) create a tunnel on host

ssh -i key.pem username@jump_server -L 3336:mysql_host:3306 -N

2) from the container, you can use host.docker.internal (or the older names docker.for.mac.localhost / docker.for.mac.host.internal) to reference the host.

For example:

mysql -h host.docker.internal -P 3336 -u admin -p

Note from the official Docker for Mac documentation:

I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST

The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.

The gateway is also reachable as gateway.docker.internal.

Solution 4

I'd like to share my solution to this. My case was as follows: I had a PostgreSQL SSH tunnel on my host and I needed one of my containers from the stack to connect to a database through it.

I spent hours trying to find a solution (Ubuntu + Docker 19.03) and failed. Instead of doing voodoo magic with iptables or modifying the settings of the Docker engine itself, I came up with a solution and was shocked I hadn't thought of it earlier. The most important thing was that I didn't want to use host mode: security first.

Instead of trying to allow the container to talk to the host, I simply added another service to the stack that creates the tunnel, so the other containers can talk to it easily without any hacks.

After configuring a host inside my ~/.ssh/config:

Host project-postgres-tunnel
    HostName remote.server.host
    User sshuser
    Port 2200
    ForwardAgent yes
    TCPKeepAlive yes
    ConnectTimeout 5
    ServerAliveCountMax 10
    ServerAliveInterval 15

And adding a service to the stack:

  postgres:
    image: cagataygurturk/docker-ssh-tunnel:0.0.1
    volumes:
      - $HOME/.ssh:/root/ssh:ro
    environment:
      TUNNEL_HOST: project-postgres-tunnel
      REMOTE_HOST: localhost
      LOCAL_PORT: 5432
      REMOTE_PORT: 5432
    # uncomment if you wish to access the tunnel on the host
    #ports:
    #  - 5432:5432

The PHP container started talking through the tunnel without any problems:

postgresql://user:password@postgres/db?serverVersion=11&charset=utf8

Just remember to put your public key inside that host if you haven't already:

ssh-copy-id project-postgres-tunnel

I'm pretty sure this will work regardless of the OS used (macOS / Linux).

Solution 5

My 2 cents for Ubuntu 18.04: a very simple answer, with no need for extra tunnels, extra containers, extra docker options, or exposing the host.

Simply, when creating a reverse tunnel, make sure ssh binds to all interfaces; by default, it binds the reverse tunnel's ports to localhost only. For example, in PuTTY make sure the option Connection -> SSH -> Tunnels -> "Remote ports do the same (SSH-2 only)" is ticked. This is more or less equivalent to specifying the bind address 0.0.0.0 for the remote part of the tunnel (more details here):

-R [bind_address:]port:host:hostport

However, this did not work for me until I enabled the GatewayPorts option in my sshd server configuration. Many thanks to Stefan Seidel for his great answer.

In short: (1) bind the reverse tunnel to 0.0.0.0, and (2) let the sshd server accept such tunnels.
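The two steps can be sketched as follows (host name and ports are placeholders; on most Linux distributions the sshd config lives in /etc/ssh/sshd_config):

```shell
# Step 2 (server side): allow clients to bind remote forwards to non-loopback
# addresses, then reload sshd. Add this line to /etc/ssh/sshd_config:
#
#   GatewayPorts clientspecified
#
# sudo systemctl reload sshd

# Step 1 (client side): open the reverse tunnel with an explicit 0.0.0.0 bind:
ssh -N -R 0.0.0.0:9999:localhost:9000 user@dev-host

# Containers on dev-host can now reach the forwarded service at 172.17.0.1:9999.
```

With GatewayPorts at its default (no), sshd silently rebinds the remote forward to loopback, which is why the 0.0.0.0 bind alone was not enough.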

Once this is done I can access my remote server from my docker containers via the docker gateway 172.17.0.1 and port bound to the host.



Author: npit

Updated on January 01, 2022

Comments

  • npit
    npit over 2 years

    Using Ubuntu Trusty, there is a service running on a remote machine that I can access via port forwarding through an ssh tunnel from localhost:9999.

    I have a docker container running. I need to access that remote service via the host's tunnel, from within the container.

    I tried tunneling from the container to the host with -L 9000:host-ip:9999, but accessing the service through 127.0.0.1:9000 from within the container fails to connect. To check whether the port mapping was active, I tried:

    nc -luv -p 9999 # at host
    nc -luv -p 9000 # at container

    following this, parag. 2, but there was no perceived communication, even when doing nc -luv host-ip -p 9000 at the container.

    I also tried mapping the ports via docker run -p 9999:9000, but this reports that the bind failed because the host port is already in use (by the host's tunnel to the remote machine, presumably).

    So my questions are

    1 - How can I achieve the connection? Do I need to set up an ssh tunnel to the host, or can this be achieved with docker port mapping alone?

    2 - What's a quick way to test that the connection is up? Via bash, preferably.

    Thanks.

    • Affes Salem
      Affes Salem over 2 years
      host.docker.internal is what you are looking for right?
  • npit
    npit almost 8 years
    Thanks. I can connect to the service now. Is there a quick way to check that the connection is indeed up, though?
  • Alejandro Vales
    Alejandro Vales over 7 years
    you can actually use curl curl {ip}:{port}/randomendpoint or wget {ip}:{port}/randomendpoint
  • hlobit
    hlobit almost 5 years
    This one should be the accepted answer. --net=host has indeed unwanted side effects...
  • Davos
    Davos almost 5 years
    Sadly there is no docker0 on MacOS docs.docker.com/docker-for-mac/networking/…
  • Felix K.
    Felix K. almost 5 years
    Hi @Davos, I am not familiar with docker on mac, but it should also be possible to choose a different available local network interface address other than the docker0 address. I just recommended this in my answer for convenience reasons. So maybe try to bind the ssh tunnel to your current eth0 address and in your docker application then use this address as destination, docker should automatically act as gateway and do the routing.
  • Felix K.
    Felix K. almost 5 years
    The page you referenced also has a section "I want to connect from a container to a service on the host" which mentions the special DNS name "host.docker.internal" to resolve the host IP (mentioning also that it will not work in production). Some guys also pass the local IP via environment variables. Maybe you find other answers on SO that cover this topic in greater detail.
  • Davos
    Davos almost 5 years
    Thanks for the follow up comment, I did find the magic DNS name and can confirm it worked. My use case was for something I only wanted to run locally, and it needed to connect via ssh tunnel to a remote data service. Running the tunnel within the container wasn't an option, the bridge network wasn't an option, so that DNS was the only way. I also could have just installed it on my host but less appealing. It's funny, the main value I get from docker is emulating production linux environments. And then the host's limitations go and leak through the hypervisor.
  • Shawn
    Shawn over 4 years
    Wow this is so cool! Confirmed working on Docker 18.09.2 as well. Didn't know you could do that. I assume this will allow you to access any port on your laptop's localhost, which really opens the door to a lot of resources.
  • Brandon
    Brandon over 4 years
    This is awesome! I've been chasing my tail on this for THREE DAYS! Thank you! Binding the ssh tunnel to 0.0.0.0 fixed it. It was originally omitted, likely defaulting to 127.0.0.1.
  • silentsurfer
    silentsurfer over 4 years
    Could you please explain why the remote host address needs to be specified twice, once in the config file and once as environment variable?
  • emix
    emix over 4 years
    I'm not sure what is where, according to you, specified twice.
  • MatrixManAtYrService
    MatrixManAtYrService about 4 years
    I don't think this will work in Mac OS, since there the docker daemon is actually on a vm: forums.docker.com/t/should-docker-run-net-host-work/14215/26
  • ahmed khalil jerbi
    ahmed khalil jerbi about 4 years
    @MatrixManAtYrService is there a solution for Docker on Mac OS?
  • MatrixManAtYrService
    MatrixManAtYrService about 4 years
    So far as I know, not one that will work easily in all cases. For me, I just switched from connecting to the host tunnel to having the container OS set up the tunnel itself. You could also try to figure out the networking between the VM and macOS.
  • Snowball
    Snowball about 4 years
    Same as @Brandon I feel a strong random stranger on the internet love towards you. Thanks so much for your answer!!
  • nik
    nik almost 4 years
    For Mac users this really is a life saver, since neither docker0 nor --net=host are supported. Thanks!
  • Felix K.
    Felix K. over 3 years
    Thanks guys for the warm feedback, it's clearly a tricky subject but I'm glad I could provide some support here :D
  • jfunk
    jfunk almost 3 years
    On the host system ssh -L 9090:0.0.0.0:9090 (listens to all IPs for port 9090) - then container config, port: 9090:9090 - then if you are on Docker for Mac you will need to generate a request to host.docker.internal:9090 from within the container -- that request should then be forwarded -- Thanks @B12Toaster and also @Brandon for the clue about 0.0.0.0