Add host mapping to /etc/hosts in Kubernetes
Solution 1
To add a hostname to the hosts file in a "semi-dynamic" fashion, you can use a postStart hook. Note that the command array is executed directly, not through a shell, so shell redirection like > or >> only works if you wrap the command in sh -c:

spec:
  containers:
    - name: somecontainer
      image: someimage
      lifecycle:
        postStart:
          exec:
            command:
              - "sh"
              - "-c"
              - "echo someip somedomain >> /etc/hosts"
A better approach, however, is to use an abstract name that represents the service across environments. For example, instead of database01.production.company.com, use database01 and set up the environment so that the name resolves to the production host in the production setting and the staging host in the staging setting.
Lastly, it is also possible to edit the kube-dns settings so that the Kubernetes DNS can resolve external DNS names. Then you just use whatever name you need in the code, and it "automagically" works. See for example https://github.com/kubernetes/kubernetes/issues/23474 for how to set this up. (This varies a bit between versions of SkyDNS; some older ones really do not work with this, so upgrade to at least Kubernetes 1.3 to make it work properly.)
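As a sketch of this stub-domain approach (the domain company.com and the resolver IP 10.0.0.53 are made-up placeholders; the stubDomains key is supported by the kube-dns ConfigMap from Kubernetes 1.6 onward):

```yaml
# Sketch: forward *.company.com lookups to an internal DNS server.
# The domain and the resolver IP are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"company.com": ["10.0.0.53"]}
```

With this in place, pods can resolve internal names through the cluster DNS without touching /etc/hosts at all.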
Solution 2
Create a file on the host system (or a Secret) with all the extra hosts you need (e.g. /tmp/extra-hosts).
Then in the K8S manifest:
spec:
  containers:
    - name: haproxy
      image: haproxy
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "cat /hosts >> /etc/hosts"]
      volumeMounts:
        - name: haproxy-hosts
          mountPath: /hosts
  volumes:
    - name: haproxy-hosts
      hostPath:
        path: /tmp/extra-hosts
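A sketch of the Secret variant mentioned above (the Secret name extra-hosts, its hosts key, and the mapping inside it are made-up examples):

```yaml
# Sketch: ship the extra host entries as a Secret instead of a hostPath,
# so the pod does not depend on a file existing on every node.
apiVersion: v1
kind: Secret
metadata:
  name: extra-hosts
stringData:
  hosts: |
    10.0.2.4 www.example.com
```

In the pod spec, swap the hostPath volume for secret: {secretName: extra-hosts}. Note that a Secret volume mounts as a directory, so with mountPath: /hosts the postStart command becomes cat /hosts/hosts >> /etc/hosts.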
Solution 3
From kubernetes.io/docs: "In addition to the default boilerplate, you can add additional entries to the hosts file. For example, to resolve foo.local and bar.local to 127.0.0.1 and foo.remote and bar.remote to 10.1.2.3, add HostAliases to the Pod under .spec.hostAliases."
Also you can "Configure stub-domain and upstream DNS servers".
Solution 4
It is now possible to add a hostAliases section directly in the description of the deployment. As a full example of how to use the hostAliases section, I have included the surrounding code for an example deployment as well.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "backend-cluster"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "backend"
  template:
    metadata:
      labels:
        app: "backend"
    spec:
      containers:
        - name: "backend"
          image: "exampleregistry.azurecr.io/backend"
          ports:
            - containerPort: 80
      hostAliases:
        - hostnames:
            - "www.example.com"
          ip: "10.0.2.4"
The important part is only a small piece of the file; here it is shown on its own, with the rest omitted for clarity:
...
hostAliases:
  - hostnames:
      - "www.example.com"
    ip: "10.0.2.4"
Solution 5
Found this documentation on adding /etc/hosts entries in a pod:
Adding entries to Pod /etc/hosts with HostAliases (service/networking/hostaliases-pod.yaml)
In addition to the default boilerplate, you can add additional entries to the hosts file. For example: to resolve foo.local, bar.local to 127.0.0.1 and foo.remote, bar.remote to 10.1.2.3, you can configure HostAliases for a Pod under .spec.hostAliases:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - "foo.local"
        - "bar.local"
    - ip: "10.1.2.3"
      hostnames:
        - "foo.remote"
        - "bar.remote"
  containers:
    - name: cat-hosts
      image: busybox
      command:
        - cat
      args:
        - "/etc/hosts"
qingdaojunzuo
Testing for first 3 years and Java engineer for other 5 years.
Updated on January 18, 2022

Comments
-
qingdaojunzuo over 2 years
I have an issue with DNS mapping in Kubernetes.
We have some servers that can be accessed from the internet. The global DNS translates these servers' domain names to public internet IPs. For security reasons, some services cannot be reached through the public IPs.
From inside the company, we manually add DNS mappings with the private IPs to /etc/hosts inside the Docker containers managed by Kubernetes so these servers can be reached.
I know Docker supports the --add-host flag to change /etc/hosts when executing docker run. Is this supported in recent Kubernetes versions, such as 1.4 or 1.5?
On the other hand, we could wrap the startup script for the container to:
- append the mappings to /etc/hosts first
- then start our application
I only want to change the file once, after the first run in each container. Is there an easy way to do this, given that the mappings may differ between development and production environments? Or does Kubernetes itself provide any commands for this?
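A minimal sketch of the "append once" idea from the question, using a grep guard so repeated runs do not duplicate entries (the mapping 10.1.2.3 foo.remote is a placeholder, and the file path is a stand-in; in a container you would target /etc/hosts):

```shell
# Append a host mapping only if it is not already present.
# ./hosts-demo stands in for /etc/hosts so the sketch runs anywhere.
HOSTS_FILE=./hosts-demo
printf '127.0.0.1 localhost\n' > "$HOSTS_FILE"

add_host() {
  # $1 = IP, $2 = hostname; skip if the hostname is already mapped.
  grep -q " $2$" "$HOSTS_FILE" || echo "$1 $2" >> "$HOSTS_FILE"
}

add_host 10.1.2.3 foo.remote
add_host 10.1.2.3 foo.remote   # second call is a no-op

cat "$HOSTS_FILE"
```

Run as (or from) the container entrypoint, this gives the "change the file once" behavior without caring how many times the script is invoked.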
-
eikooc about 3 years
It is now possible to add this directly in the deployment config.
-
qingdaojunzuo over 7 years
Thanks for the response. This is useful for me in some cases; we can use service names instead of domain names/IPs inside the Kubernetes cluster. But now we have a service A inside the cluster that uses Kafka & ZooKeeper, which are deployed outside the kube cluster. Service A gets all the Kafka brokers from ZooKeeper as domain names. These domain names cannot be changed to kube service names because they are shared with other systems outside the kube cluster, and they need to be resolved to private IPs by service A. Is there any way to resolve this easily? Really appreciate your help.
-
qingdaojunzuo over 7 years
Really thanks for the update, it's useful. I just read the doc kubernetes.io/docs/user-guide/container-environment and tried this. I think the command needs to be changed slightly, to
exec: command: - "sh" - "-c" - "echo someip somedomain >> /etc/hosts"
It seems that Kubernetes just uses os/exec's exec.Command(args).Run() to run the command, so "|", ">" and ">>" are not supported directly. Appreciate your help :)
Norbert van Nobelen about 7 years
@qingdaojunzuo Yes, you are correct: the command syntax in all of these command parts is far from user friendly; guessing them on the spot without testing is a recipe for mistakes :)
-
Girish almost 2 years
So here we don't need the args: - "/etc/hosts" part.