Tiller: dial tcp 127.0.0.1:80: connect: connection refused

It means the server: entry in your .kube_config.yml is pointing to the wrong port (and perhaps even the wrong protocol, since normal Kubernetes API traffic travels over https and is secured via mutual TLS authentication), or the proxy that used to listen on localhost:80 is no longer running, or perhaps the --insecure-port used to be 80 and is now 0 (as is strongly recommended).

Regrettably, without more specifics, no one can guess what the correct value was or what it should be changed to.
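
For illustration, here is a minimal sketch of one alternative; this is not the asker's configuration, and the variable cluster_name is a placeholder that does not exist in the original script. It reads the endpoint, cluster CA, and an authentication token from the EKS data sources, so the kubernetes and helm providers never fall back to the localhost:80 default at all:

    data "aws_eks_cluster" "this" {
      # "cluster_name" is a hypothetical variable holding the EKS cluster name
      name = var.cluster_name
    }

    data "aws_eks_cluster_auth" "this" {
      name = var.cluster_name
    }

    provider "kubernetes" {
      # Talk to the real https endpoint with the cluster CA and an IAM token,
      # instead of loading a kubeconfig file from disk
      host                   = data.aws_eks_cluster.this.endpoint
      cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
      token                  = data.aws_eks_cluster_auth.this.token
      load_config_file       = false
      version                = "~> 1.9"
    }

    provider "helm" {
      kubernetes {
        host                   = data.aws_eks_cluster.this.endpoint
        cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
        token                  = data.aws_eks_cluster_auth.this.token
        load_config_file       = false
      }
    }

Because the aws_eks_cluster_auth token is generated fresh on every plan or apply, this arrangement also keeps working when a kubeconfig on disk has gone stale.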

Comments

  • gamechanger17 almost 2 years

    Ever since I upgraded the versions in my EKS Terraform script, I have been getting error after error.

    Currently I am stuck on these errors:

    Error: Get http://localhost/api/v1/namespaces/kube-system/serviceaccounts/tiller: dial tcp 127.0.0.1:80: connect: connection refused

    Error: Get http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller: dial tcp 127.0.0.1:80: connect: connection refused

    The script works fine with the old versions and I can still use it that way, but I am trying to upgrade the cluster version.

    provider.tf

    provider "aws" {
      region  = "${var.region}"
      version = "~> 2.0"
    
      assume_role {
        role_arn = "arn:aws:iam::${var.target_account_id}:role/terraform"
      }
    }
    
    provider "kubernetes" {
      config_path = ".kube_config.yaml"
      version = "~> 1.9"
    }
    
    provider "helm" {
      service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
      namespace       = "${kubernetes_service_account.tiller.metadata.0.namespace}"
    
    
      kubernetes {
        config_path = ".kube_config.yaml"
      }
    }
    
    terraform {
      backend "s3" {
    
      }
    }
    
    data "terraform_remote_state" "state" {
      backend = "s3"
      config = {
        bucket         = "${var.backend_config_bucket}"
        region         = "${var.backend_config_bucket_region}"
        key            = "${var.name}/${var.backend_config_tfstate_file_key}" # var.name == CLIENT
        role_arn       = "${var.backend_config_role_arn}"
        skip_region_validation = true
        dynamodb_table = "terraform_locks"
        encrypt        = "true"
      }
    }
    

    kubernetes.tf

    resource "kubernetes_service_account" "tiller" {
      #depends_on = ["module.eks"]
    
      metadata {
        name      = "tiller"
        namespace = "kube-system"
      }
    
      automount_service_account_token = "true"
    }
    
    resource "kubernetes_cluster_role_binding" "tiller" {
      depends_on = ["module.eks"]
    
      metadata {
        name = "tiller"
      }
    
      role_ref {
        api_group = "rbac.authorization.k8s.io"
        kind      = "ClusterRole"
        name      = "cluster-admin"
      }
    
      subject {
        kind = "ServiceAccount"
        name = "tiller"
    
        api_group = ""
        namespace = "kube-system"
      }
    }
    

    Terraform version: 0.12.12, EKS module version: 6.0.2

  • gamechanger17 over 4 years
    It was working fine with the old version. Could you let me know which script I should post to help diagnose the error? I checked the GitHub issues as well; nothing is working for me.
  • mdaniel over 4 years
    It's not exactly a script we need, it's more contextual information: where did that .kube_config.yml come from? Can anyone access your Kubernetes cluster at all? Is that terraform command being run on a master node? Are you the only one who would be making changes to the system? The list of questions is almost endless, because we are not on your machine and cannot know what changed.
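
For reference, one common answer to "where did that .kube_config.yaml come from?" in setups like this one is that it is rendered by the EKS module itself. The sketch below assumes the terraform-aws-modules/eks module in the 6.x series, whose releases of that era exposed the generated kubeconfig as an output, and writes it to the path the providers already read; if the file is instead hand-maintained or copied from an old cluster, a stale server: entry would explain the localhost fallback seen above.

    resource "local_file" "kubeconfig" {
      # Write the module-rendered kubeconfig to the same path the
      # kubernetes and helm providers read, so the server: entry always
      # matches the live EKS endpoint
      content  = module.eks.kubeconfig
      filename = ".kube_config.yaml"
    }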