How to create a local development environment for Kubernetes?


Solution 1

Update (2016-07-15)

With the release of Kubernetes 1.3, Minikube is now the recommended way to run Kubernetes on your local machine for development.


You can run Kubernetes locally via Docker. Once you have a node running, you can launch a pod with a simple web server that mounts a volume from your host machine. When you hit the web server, it reads from the volume; if you've changed the file on your local disk, it serves the latest version.
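A minimal sketch of such a pod (the image, names, and paths below are placeholders, not from the original answer; it assumes the node is your own machine, as with the old local-Docker setup):

# Pod with a simple web server that serves files straight from a host directory
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dev-web
spec:
  containers:
  - name: web
    image: nginx:alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: src
      mountPath: /usr/share/nginx/html    # nginx serves this directory
  volumes:
  - name: src
    hostPath:
      path: /home/me/project/public       # directory on the node, i.e. your machine
EOF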

Solution 2

We've been working on a tool to do this. The basic idea is that you have a remote Kubernetes cluster, effectively a staging environment, and then you run code locally and it gets proxied to the remote cluster. You get transparent network access, environment variables copied over, access to volumes... as close as feasible to the remote environment, but with your code running locally and under your full control.

So you can do live development, for example. Docs at http://telepresence.io
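For example, with the classic Telepresence CLI you can swap a remote deployment for a process on your laptop (the deployment name and command here are placeholders):

# Swap the remote "myservice" deployment for a local shell that has the
# cluster's network, environment variables, and volumes
$ telepresence --swap-deployment myservice --run-shell

# Or run your service directly, proxied into the cluster
$ telepresence --swap-deployment myservice --run python app.py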

Solution 3

This sort of "hot reload" is something we have plans to add, but it is not as easy as it could be today. However, if you're feeling adventurous you can use rsync with docker exec, kubectl exec, or osc exec (all do roughly the same thing) to sync a local directory into a container whenever it changes. You can use rsync with kubectl or osc exec like so:

# rsync using osc as netcat
$ rsync -av -e 'osc exec -ip test -- /bin/bash' mylocalfolder/ /tmp/remote/folder
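If you want the sync to happen on every change rather than on demand, a crude polling loop around kubectl cp also works (the pod name and paths are placeholders; kubectl cp needs tar available inside the container):

# Re-copy the local folder into the pod every few seconds
$ while true; do
    kubectl cp mylocalfolder/ test:/tmp/remote/folder
    sleep 2
  done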

Solution 4

EDIT 2022: By now there are obviously dozens of ways to provision k8s, unlike in 2015 when we started using it: kubeadm, microk8s, k3s, kube-spray, etc.

My advice: if your cluster can't fit on your workstation/laptop, rent a Hetzner server for 40 euros a month, and run WSL2 if you're on Windows.

Set up a k8s cluster on the remote machine (with any of the above; I prefer microk8s these days). Set up Docker and Telepresence on your local Linux/Mac/WSL2 env. Install kubectl and connect it to the remote cluster.
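Roughly, the moving parts look like this (microk8s from snap; the hostnames and paths are placeholders, and the exported kubeconfig may need its server address and certificates adjusted for remote access):

# On the remote server (Ubuntu with snap):
$ sudo snap install microk8s --classic
$ microk8s status --wait-ready
$ microk8s config > ~/remote-kubeconfig     # kubeconfig for this cluster

# On your local Linux/Mac/WSL2 environment:
$ scp myserver:~/remote-kubeconfig ~/.kube/remote-cluster
$ export KUBECONFIG=~/.kube/remote-cluster
$ kubectl get nodes                          # should list the remote node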

Telepresence will let you replace a remote pod with a local docker pod, with access to local files (hopefully the same git repo that's used to build the pod you're developing/replacing), and possibly nodemon (or other language-specific auto-source-code-reload system).

Write bash functions. I cannot stress this enough; this will save you hundreds of hours. If replacing the pod and starting to develop isn't one line / two words, then you're not doing it well enough.
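For example, a tiny helper along these lines makes the swap a single word (classic Telepresence flags; the deployment name, image, and nodemon command are all placeholders):

# Swap a remote deployment for a local container running the current checkout,
# with nodemon restarting the process on every file save
devswap() {
  telepresence --swap-deployment "${1:?deployment name}" \
    --docker-run --rm -it \
    -v "$(pwd):/app" \
    myregistry/myservice:dev \
    nodemon --legacy-watch /app/server.js
}
# Usage: devswap users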


2016 answer below:

Another great starting point is this Vagrant setup, especially if your host OS is Windows. The obvious advantages are:

  • quick and painless setup
  • easy to destroy / recreate the machine
  • implicit limit on resources
  • ability to test horizontal scaling by creating multiple nodes

The disadvantages: you need a lot of RAM, and VirtualBox is VirtualBox... for better or worse.

A mixed advantage / disadvantage is mapping files through NFS. In our setup, we created two sets of RC definitions: one that just downloads a Docker image of our application servers; the other with 7 extra lines that set up file mapping from host OS -> Vagrant -> VirtualBox -> CoreOS -> Kubernetes pod, overwriting the source code from the Docker image.

The downside of this is the NFS file cache: with it, it's problematic; without it, it's problematically slow. Even setting mount_options: 'nolock,vers=3,udp,noac' doesn't get rid of caching problems completely, but it works most of the time. Some Gulp tasks run in a container can take 5 minutes when they take 8 seconds on the host OS. A good compromise seems to be mount_options: 'nolock,vers=3,udp,ac,hard,noatime,nodiratime,acregmin=2,acdirmin=5,acregmax=15,acdirmax=15'.

As for automatic code reload, that's language-specific, but we're happy with Django's devserver for Python and Nodemon for Node.js. For frontend projects, you can of course do a lot with something like gulp + browserSync + watch, but for many developers it's not difficult to serve from Apache and just do a traditional hard refresh.
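Concretely, that just means running the framework's own watcher inside the container; over NFS, polling-based watching tends to be more reliable than inotify (the paths here are placeholders):

# Python / Django: the dev server reloads on file changes
$ python manage.py runserver 0.0.0.0:8000

# Node.js: nodemon in legacy (polling) watch mode, which works over NFS mounts
$ nodemon --legacy-watch server.js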

We keep 4 sets of yaml files for Kubernetes: dev, "devstable", stage, and prod. The differences between them are:

  • env variables explicitly setting the environment (dev/stage/prod)
  • number of replicas
  • devstable, stage, and prod use Docker images
  • dev uses Docker images and maps an NFS folder with source code over them.

It's very useful to create a lot of bash aliases and autocomplete: I can just type rec users and it will do kubectl delete -f ... ; kubectl create -f .... If I want the whole setup started, I type recfo, and it recreates a dozen services, pulling the latest Docker images, importing the latest db dump from the Staging env, and cleaning up old Docker files to save space.
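The helpers themselves are trivial; something along these lines (the directory layout and service names below are made up):

# Recreate one service from its dev yaml files, e.g. "rec users"
rec() {
  kubectl delete -f "k8s/dev/$1/"
  kubectl create -f "k8s/dev/$1/"
}

# Recreate the whole dev environment
recfo() {
  for svc in users billing frontend; do rec "$svc"; done
  # ...then pull the latest images, import the latest Staging db dump,
  # and clean up old Docker files to save space
}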

Solution 5

I've just started with Skaffold

It's really useful for applying code changes automatically to a local cluster.

To deploy a local cluster, the best options are Minikube or simply Docker for Mac and Windows; both include a Kubernetes interface.
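Getting started is roughly two commands (standard Skaffold CLI; the Dockerfile and manifests are whatever your project already has):

# Generate a skaffold.yaml from your existing Dockerfile and k8s manifests
$ skaffold init

# Build, deploy to the current kubectl context, and watch the source tree;
# on change it rebuilds and redeploys (or syncs files) automatically
$ skaffold dev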

Author: Wernight

Video games / applications / website developer in C++ / C# / Python / PHP using UML, Agile/Scrum, and Unit Testing. From software implementation to business talk, by way of project management.

Updated on April 14, 2021

Comments

  • Wernight (about 3 years)

    Kubernetes seems to be all about deploying containers to a cloud of clusters. What it doesn't seem to touch is development and staging environments (or such).

    During development you want to be as close as possible to production environment with some important changes:

    • Deployed locally (or at least somewhere that only you can access)
    • Use the latest source code on page refresh (supposing it's a website; ideally the page auto-refreshes on local file save, which can be done if you mount the source code and use something like Yeoman).

    Similarly one may want a non-public environment to do continuous integration.

    Does Kubernetes support this kind of development environment, or is it something one has to build, hoping that it'll still work in production?

  • Wernight (about 9 years)
    Hot reload itself is, and should be, handled by the web framework you use; Yeoman usually sets that up. What's missing is how to enable it: it requires a local volume to be mounted. If @Robert's answer works, it should be a valid solution.
  • Jatin (almost 8 years)
    The docs say it's not the recommended method anymore and that "Minikube is the recommended method of running Kubernetes on your local machine."
  • harryz (over 7 years)
    I don't think minikube is suitable for developing k8s itself, am I right?
  • Robert Bailey (over 7 years)
    It depends on what you are developing. There are many parts of k8s where it's reasonable to use minikube for development. If you are working on pod networking, security policies, or CNI plugins, though, it wouldn't make much sense.
  • Pwnosaurus (over 6 years)
    "Kubernetes locally via Docker" link is broken. Anyone have an update?
  • Robert Bailey (over 6 years)
    Minikube replaced the local docker setup a while back and the documentation for the local docker version has subsequently been removed. Does Minikube work for your needs? You can also use kubeadm inside of a VM to instantiate a local single node cluster.
  • Attila Szeremi (almost 4 years)
    > Make sure your system doesn't have any docker or kubelet service running. But I already have Docker installed locally, and I'm running containers apart from Kubernetes. Does that mean I can't install microk8s locally?
  • mh-cbon (over 2 years)
    It has been more than a year now; does it still stand? I want to get started with Kubernetes and am looking for a solution; I like those properties of microk8s, but I don't want to make my life harder than needed to get the job done.
  • Prafull Ladha (about 2 years)
    Yes, it still stands, and microk8s has released support for Windows and macOS as well. You can check it out at microk8s.io