How to Add Users to Kubernetes (kubectl)?


Solution 1

For a full overview of authentication, refer to the official Kubernetes docs on Authentication and Authorization.

For users, you should ideally use an identity provider for Kubernetes (OpenID Connect).

If you are on GKE / ACS, integrate with the respective Identity and Access Management framework.

If you self-host Kubernetes (which is the case when you use kops), you may use coreos/dex to integrate with LDAP / OAuth2 identity providers; a good reference is the detailed two-part SSO for Kubernetes article.

kops (1.10+) now has built-in authentication support, which eases integration with AWS IAM as an identity provider if you're on AWS.

For Dex, there are also a few open source CLI clients available.
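
Once Dex (or another OIDC provider) is set up, a user entry in kubeconfig can be created along these lines. This is a sketch only: the issuer URL, client ID/secret, and tokens below are placeholders, and note that the oidc auth-provider has since been deprecated in favour of exec credential plugins in newer kubectl releases:

kubectl config set-credentials alice \
    --auth-provider=oidc \
    --auth-provider-arg=idp-issuer-url=https://dex.example.com \
    --auth-provider-arg=client-id=kubernetes \
    --auth-provider-arg=client-secret=${CLIENT_SECRET} \
    --auth-provider-arg=id-token=${ID_TOKEN} \
    --auth-provider-arg=refresh-token=${REFRESH_TOKEN}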

If you are looking for a quick and easy way to get started (not the most secure, nor easy to manage in the long run), you may abuse service accounts, with two options for specialised policies to control access (see below).

NOTE: since 1.6, Role Based Access Control is strongly recommended! This answer does not cover RBAC setup.

EDIT: A great, but outdated (2017-2018), guide by Bitnami on user setup with RBAC is also available.

The steps to enable service account access are as follows (depending on whether your cluster configuration includes RBAC or ABAC policies, these accounts may have full admin rights!):

EDIT: The steps below can also be wrapped in a bash script to automate service account creation; a consolidated sketch follows the step list.

  1. Create service account for user Alice

    kubectl create sa alice
    
  2. Get related secret

    secret=$(kubectl get sa alice -o json | jq -r '.secrets[].name')
    
  3. Get ca.crt from secret (using macOS base64 with the -D flag to decode; on Linux use -d)

    kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -D > ca.crt
    
  4. Get service account token from secret

    user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -D)
    
  5. Get information from your kubectl config (current-context, server, ...)

    # get current context
    c=$(kubectl config current-context)
    
    # get cluster name of context
    name=$(kubectl config get-contexts $c | awk '{print $3}' | tail -n 1)
    
    # get endpoint of current context 
    endpoint=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}")
    
  6. On a fresh machine, follow these steps (given the ca.crt and $endpoint information retrieved above):

    1. Install kubectl

       brew install kubectl
      
    2. Set cluster (run in directory where ca.crt is stored)

       kubectl config set-cluster cluster-staging \
         --embed-certs=true \
         --server=$endpoint \
         --certificate-authority=./ca.crt
      
    3. Set user credentials

       kubectl config set-credentials alice-staging --token=$user_token
      
    4. Define the combination of alice user with the staging cluster

       kubectl config set-context alice-staging \
         --cluster=cluster-staging \
         --user=alice-staging \
         --namespace=alice
      
    5. Switch current-context to alice-staging for the user

       kubectl config use-context alice-staging
      
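
As mentioned above, the steps lend themselves to a bash script. Here is a consolidated sketch, assuming jq is installed and reusing the alice / cluster-staging names from the steps. Note that since Kubernetes 1.24, service account token secrets are no longer auto-created, so on newer clusters you would use kubectl create token alice instead of reading the secret:

#!/usr/bin/env bash
set -euo pipefail

user="alice"

# 1. create the service account
kubectl create sa "$user"

# 2.-4. extract the CA cert and token from the generated secret
# (macOS base64 decodes with -D; use -d on Linux)
secret=$(kubectl get sa "$user" -o json | jq -r '.secrets[].name')
kubectl get secret "$secret" -o json | jq -r '.data["ca.crt"]' | base64 -D > ca.crt
user_token=$(kubectl get secret "$secret" -o json | jq -r '.data["token"]' | base64 -D)

# 5. look up the endpoint of the current cluster
c=$(kubectl config current-context)
name=$(kubectl config get-contexts "$c" | awk '{print $3}' | tail -n 1)
endpoint=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}")

# 6. write a standalone kubeconfig that can be handed to the user
export KUBECONFIG="${user}-config"
kubectl config set-cluster cluster-staging \
  --embed-certs=true --server="$endpoint" --certificate-authority=./ca.crt
kubectl config set-credentials "${user}-staging" --token="$user_token"
kubectl config set-context "${user}-staging" \
  --cluster=cluster-staging --user="${user}-staging" --namespace="$user"
kubectl config use-context "${user}-staging"

As noted in the comments below, the resulting alice-config file (with certs embedded) can simply be sent to Alice to use as her ~/.kube/config.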

To control user access with policies (using ABAC), you need to create a policy file (for example):

{
  "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
  "kind": "Policy",
  "spec": {
    "user": "system:serviceaccount:default:alice",
    "namespace": "default",
    "resource": "*",
    "readonly": true
  }
}
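
Note that the ABAC policy file format is one JSON object per line, so in the actual policy.json the policy above must be collapsed onto a single line:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "system:serviceaccount:default:alice", "namespace": "default", "resource": "*", "readonly": true}}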

Provision this policy.json on every master node and add the --authorization-mode=ABAC and --authorization-policy-file=/path/to/policy.json flags to the API servers.

This would grant Alice (through her service account) read-only access to all resources in the default namespace only.
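
Under RBAC (the recommended mode since 1.6), a roughly equivalent read-only grant can be made by binding the service account to the built-in view ClusterRole; a minimal sketch, since RBAC setup is otherwise out of scope for this answer:

kubectl create rolebinding alice-view \
    --clusterrole=view \
    --serviceaccount=default:alice \
    --namespace=default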

Solution 2

You say:

I need to enable other users to also administer.

But according to the documentation:

Normal users are assumed to be managed by an outside, independent service. An admin distributing private keys, a user store like Keystone or Google Accounts, even a file with a list of usernames and passwords. In this regard, Kubernetes does not have objects which represent normal user accounts. Regular users cannot be added to a cluster through an API call.

You have to use a third-party tool for this.

== Edit ==

One solution could be to manually create a user entry in the kubeconfig file. From the documentation:

# create kubeconfig entry
# (or, if TLS is not needed, replace --certificate-authority and
# --embed-certs with --insecure-skip-tls-verify=true)
$ kubectl config set-cluster $CLUSTER_NICK \
    --server=https://1.1.1.1 \
    --certificate-authority=/path/to/apiserver/ca_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config

# create user entry; use either bearer token credentials (generated on the
# kube master), username|password, or client certs - not all of them at once
$ kubectl config set-credentials $USER_NICK \
    --token=$token \
    --username=$username \
    --password=$password \
    --client-certificate=/path/to/crt_file \
    --client-key=/path/to/key_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config

# create context entry
$ kubectl config set-context $CONTEXT_NAME \
    --cluster=$CLUSTER_NICK \
    --user=$USER_NICK \
    --kubeconfig=/path/to/standalone/.kube/config
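
To verify the new entry before handing the file over, point kubectl at the standalone kubeconfig and issue a read request (context name as above):

$ kubectl --kubeconfig=/path/to/standalone/.kube/config \
    --context=$CONTEXT_NAME get nodes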

Solution 3

The Bitnami guide works for me, even if you use minikube. Most important is that your cluster supports RBAC: https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
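
For reference, the core of that guide is issuing a client certificate signed by the cluster CA for a normal user. A minimal sketch of the idea follows; the file names and the /CN=alice/O=devs subject are assumptions, and you need access to the cluster's CA key pair:

# generate a private key and a CSR for user alice (group devs)
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -out alice.csr -subj "/CN=alice/O=devs"

# sign the CSR with the cluster CA (CN becomes the username, O the group)
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out alice.crt -days 365

# register the credentials and a context for them
kubectl config set-credentials alice \
    --client-certificate=alice.crt --client-key=alice.key --embed-certs=true
kubectl config set-context alice-context --cluster=minikube --user=alice

RBAC Roles and RoleBindings then grant permissions to that user (CN) or group (O).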


Comments

  • peterl, almost 2 years:

    I've created a Kubernetes cluster on AWS with kops and can successfully administer it via kubectl from my local machine.

    I can view the current config with kubectl config view as well as directly access the stored state at ~/.kube/config, such as:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: REDACTED
        server: https://api.{CLUSTER_NAME}
      name: {CLUSTER_NAME}
    contexts:
    - context:
        cluster: {CLUSTER_NAME}
        user: {CLUSTER_NAME}
      name: {CLUSTER_NAME}
    current-context: {CLUSTER_NAME}
    kind: Config
    preferences: {}
    users:
    - name: {CLUSTER_NAME}
      user:
        client-certificate-data: REDACTED
        client-key-data: REDACTED
        password: REDACTED
        username: admin
    - name: {CLUSTER_NAME}-basic-auth
      user:
        password: REDACTED
        username: admin
    

    I need to enable other users to also administer. This user guide describes how to define these on another user's machine, but doesn't describe how to actually create the user's credentials within the cluster itself. How do you do this?

    Also, is it safe to just share the cluster.certificate-authority-data?

  • peterl, about 7 years:
    I read that in the docs as well, but the thing is I created my cluster with Kops and it created the initial admin user, so there must be a way to create another one.
  • Vincent De Smet, about 7 years:
    Although, it would be better to provide read-only access using ChatOps, log shipping and manage deployments through CI systems. The only annoying part is how to enable easy console access to Developers ...
  • Vincent De Smet, about 7 years:
    for dashboard access use kubectl proxy & and point to localhost:8001/api/v1/proxy (the kubernetes-dashboard service in the kube-system namespace)
  • peterl, about 7 years:
    Yes, once the user is created in the cluster, you'd use the kubectl config command with set-cluster, set-credentials and set-context instructions as I mentioned in the original question. But how do you create the actual user in the cluster? Where do you get the actual certs supplied along with those instructions?
  • peterl, about 7 years:
    Perfect. That's what I was looking for. One clarification, though: step 3 creates a ca.crt file, but step 6.2 is looking for a ca.pem file. Is some translation required, or was this just a typo?
  • Vincent De Smet, about 7 years:
    also note that you could use export KUBECONFIG=alice-config on your machine to generate a single alice-config file (with certs embedded) and just send that to alice (telling her to copy it to ~/.kube/config) - but this would complicate her tasks if she needs to manage multiple clusters and contexts
  • peterl, about 7 years:
    Awesome. Many thanks. Maybe you'd even consider writing something up for Kubernetes.io.
  • caarlos0, about 7 years:
    Hi @peterl, I'm having the same doubts... did you ever solve that?
  • peterl, about 7 years:
    Yes, I used the solution from Vincent De Smet, which worked like a charm.
  • Vincent De Smet, over 6 years:
    correct, on OSX man base64 shows -D for decode (uppercase d)
  • gmile, over 6 years:
    @VincentDeSmet you say "better support for user objects is still in the pipeline", do you know if anything has changed since then? Maybe there's an RFC or open PRs/issues in kubernetes/kubernetes?
  • Vincent De Smet, over 6 years:
    @gmile - User Identity will never be a part of Kubernetes and instead will be managed through identity providers. k8s supports OIDC, and with something like CoreOS/dex you can hook it up to LDAP / OAuth2 providers (which is what we did: github.com/honestbee/dex-app)
  • Blue, over 5 years:
    Don't you mean RBAC? Can you clarify a bit, this is borderline link-only.
  • David Ham, over 5 years:
    Why a service account? I thought service accounts were for defining pod access, and that for users you would use user accounts?
  • Vincent De Smet, over 5 years:
    Hi @DavidHam - please read the full answer - you'd need an identity provider for users. Service accounts basically provide a JWT for API access and can be used for robots / programs; they shouldn't be used for users, as highlighted in detail in the answer
  • Brian, almost 3 years:
    I have to run secret=$(kubectl get sa alice -o json | jq -r '.secrets[].name') on my computer to get the secret.