Docker: Container to container communication via host
Most services running on my server are just private projects for learning new tools, technologies, etc. Hence, I mostly install updates without first checking whether everything else still runs. I have run into problems because of that in the past, but I have also learned a lot.
After pushing Rust code I wrote for an Exercism.io exercise to my private GitLab instance, its pipeline failed.
I’m using GitLab’s CI/CD feature to build my software, run unit tests, linters, etc.
GitLab Runner is the tool that gets triggered to run the tasks defined in the repository’s `.gitlab-ci.yml`.
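For a Rust exercise, such a pipeline definition might look roughly like this (a minimal sketch; the job and commands are assumptions, not my exact pipeline):

```yaml
# .gitlab-ci.yml — minimal sketch, not the exact pipeline
image: rust:latest

test:
  script:
    - cargo build --verbose   # build the crate
    - cargo test --verbose    # run the unit tests
```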
Since I’m mostly reusing a working pipeline definition from previous exercises, I triggered an old pipeline.
Without any changes to the repository, this pipeline failed as well. So it seemed that some update (software, configuration, …) caused the failing pipelines.
Looking at the pipeline’s output, I saw that the command to clone my git repository failed. The problem was that git running in Docker could not establish a connection to my GitLab instance because of a timeout:
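The error looked roughly like this (reconstructed; domain and repository path are hypothetical):

```
fatal: unable to access 'https://gitlab.example.com/me/rust-exercises.git/': Failed to connect to gitlab.example.com port 443: Connection timed out
```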
I configured GitLab Runner to build in Docker, and for this specific repository I’m using the latest Rust image.
My first step was to run the same image manually and try to `curl` the address of my GitLab instance, which also failed.
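Reproducing this manually looked roughly like the following (hypothetical domain and IP):

```sh
# Start a throwaway container from the same image the pipeline uses
docker run --rm -it rust:latest bash

# Inside the container: times out against my GitLab instance
curl -m 10 https://gitlab.example.com

# ...while pinging the host's IP address works
ping -c 3 203.0.113.10
```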
But surprisingly, pinging the IP address worked. I tried several other subdomains pointing to different containers running on my server; all of them failed as well. However, `curl`ing domains that do not point to my server was successful.
So the error was clearly somewhere on my side, more specifically somewhere in my Docker setup, since it seemed that only Docker-related services were not answering.
I’m still not 100% sure about the root cause and when/how it was introduced. But what I do know is the following:
- Applications running in Docker containers are reachable from the internet.
- They are also reachable via their internal DNS names, in my case assigned by `docker-compose`.
- Docker containers can reach services outside my host via `curl`.
- My containers are running in a separate Docker network.
- Containers cannot reach other services running in different containers using the host’s IP address.
- But they can reach those services when using the Docker-internal DNS names (sketched below).
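A quick demonstration of the last two observations (hypothetical names and addresses):

```sh
# Times out: a sibling container addressed via the host's public IP
docker run --rm rust:latest curl -m 5 https://203.0.113.10

# Works: the same service addressed via its Docker-internal DNS name,
# from a container attached to the same network
docker run --rm --network gitlab-network rust:latest curl -m 5 http://gitlab
```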
I found this Stack Overflow question, which describes pretty much exactly my problem.
Solution 1
After a lot of trial and error, I read about the `net=host` argument for Docker. In short, a Docker container started with this argument is not placed in a separate Docker network but in the host’s network. So it looks as if the application running in Docker were running directly on the host. See the Docker documentation or this blog post for detailed information.
After starting my temporary `rust:latest` Docker container with the `net=host` argument, I was able to `curl` other services running in containers on the same host. Hence, `git clone`ing a repository from my GitLab instance worked as well.
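For example (hypothetical domain; `--net=host` shares the host’s network stack with the container):

```sh
docker run --rm -it --net=host rust:latest \
  curl -sI https://gitlab.example.com
```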
I extended my GitLab Runner configuration (`/etc/gitlab-runner/config.toml`):
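A minimal sketch of the relevant part (name, URL, and token are placeholders); `network_mode = "host"` in the `[runners.docker]` section is the config.toml equivalent of `net=host`:

```toml
[[runners]]
  name = "rust-runner"                 # placeholder
  url = "https://gitlab.example.com/"  # hypothetical domain
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    image = "rust:latest"
    network_mode = "host"              # run build containers in the host's network
```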
Solution 2 (preferred)
My preferred solution is to have all containers that talk to each other running in the same Docker network.
Because Docker assigns DNS names to containers, it is possible to use those names instead of the host’s DNS name/IP address.
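For reference, a minimal docker-compose sketch of such a shared, named network (service and network names are assumptions):

```yaml
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    networks:
      - gitlab-network

networks:
  gitlab-network:
    name: gitlab-network  # fixed name so other containers can join this network
```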
So I also adjusted the configuration of GitLab Runner in `/etc/gitlab-runner/config.toml`:
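Again only a sketch (names are assumptions): `network_mode` attaches the build containers to the shared network, and `clone_url` tells the runner to clone via the Docker-internal DNS name:

```toml
[[runners]]
  name = "rust-runner"                 # placeholder
  url = "https://gitlab.example.com/"  # hypothetical public domain
  token = "REDACTED"
  executor = "docker"
  clone_url = "http://gitlab"          # Docker-internal DNS name of the GitLab container (assumed)
  [runners.docker]
    image = "rust:latest"
    network_mode = "gitlab-network"    # same network the GitLab container is attached to (assumed name)
```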
I have to use `HTTP` instead of `HTTPS` because of my setup with Traefik as a reverse proxy; see this post.
The specified network needs to be the same network that GitLab’s container is attached to.