Kubernetes - How to edit CoreDNS corefile configmap?
Solution 1
It looks like your Corefile somehow got corrupted while editing it through the "kubectl edit ..." command. It is probably the fault of your default text editor, but either way the resulting Corefile is invalid.
I would recommend replacing your current config map. Save the manifest below as coredns_cm.yaml, then run:
kubectl replace -n kube-system -f coredns_cm.yaml
# coredns_cm.yaml
apiVersion: v1
data:
  Corefile: |
    cluster.local:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: coredns
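Note that the Corefile above uses the proxy plugin. In newer CoreDNS releases (1.4.0 and later) proxy was removed in favor of forward, so if the pods log a Corefile parse error after you replace the ConfigMap, swapping that one line may help:

# In newer CoreDNS releases, replace:
#   proxy . /etc/resolv.conf
# with:
forward . /etc/resolv.conf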
Solution 2
$ kubectl -n kube-system edit configmaps coredns -o yaml
Then use vi to edit and save the coredns configmap. Once it is saved, the change will be applied.
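If kubectl edit keeps mangling the file (for example through editor line-wrapping), a non-interactive round trip can be sketched as below. This is a sketch only: it assumes kubectl is already configured for your cluster, and coredns_cm.yaml is an arbitrary local file name.

```shell
# Sketch: dump the CoreDNS ConfigMap, edit it offline, and push it back.
update_coredns() {
  # Dump the live ConfigMap to a local file:
  kubectl -n kube-system get configmap coredns -o yaml > coredns_cm.yaml
  # ...edit coredns_cm.yaml with any editor you trust, then push it back:
  kubectl -n kube-system replace -f coredns_cm.yaml
  # With the reload plugin present in the Corefile, the running CoreDNS
  # pods pick up the new config on their own (default check interval: 30s).
}
```

You can also point kubectl edit at a specific editor via the KUBE_EDITOR environment variable, e.g. `KUBE_EDITOR=nano kubectl -n kube-system edit configmap coredns`.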
A. Davidson
Updated on June 04, 2022

Comments
-
A. Davidson almost 2 years
I have a pretty standard installation of Kubernetes running as a single-node cluster on Ubuntu. I am trying to configure CoreDNS to resolve all internal services within my Kubernetes cluster and SOME external domain names. So far, I have just been experimenting. I started by creating a busybox pod as seen here: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
Everything works as described in the guide until I make changes to the corefile. I am seeing a couple of issues:
- I edited the default corefile using
kubectl -n kube-system edit configmap coredns
and replaced .:53 with cluster.local:53. After waiting, things looked promising: google.com resolution began failing, while kubernetes.default.svc.cluster.local continued to succeed. However, kubernetes.default resolution began failing too. Why is that? There is still a search entry for svc.cluster.local in the busybox pod’s /etc/resolv.conf. All that changed was the corefile.
-
I tried to add an additional stanza/block to the corefile (again, by editing the config map). I added a simple block:
.:53 { log }
It seems that the corefile fails to compile or something. The pods seem healthy and don’t report any errors to the logs, but the requests all hang and fail.
I have tried to add the log plugin, but this isn’t working since the plugin is only applied to domains matching the plugin, and either the domain name doesn’t match or the corefile is broken.
For transparency, this is my new corefile:
cluster.local:53 {
    errors
    log
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
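One likely explanation for the asker's symptoms: with only a cluster.local:53 server block, CoreDNS has no server configured for any other zone, so queries falling outside cluster.local (external names, and some search-path expansions) get no useful answer, and the busybox resolver may give up rather than try the next search suffix. Keeping a default .:53 block alongside the cluster zone restores external resolution. The snippet below is a minimal sketch of that shape, not the asker's exact config:

cluster.local:53 {
    errors
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    cache 30
}
.:53 {
    log
    forward . /etc/resolv.conf
    cache 30
}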
-
Aaron Hoffman over 3 years
FYI - in the example above, you may have to use forward instead of proxy: kubernetes.io/docs/tasks/administer-cluster/…