Hey,

One of the most common questions I get from folks who have never tried Kubernetes before is: how do I even get a cluster to play with?

I’m definitely not the most knowledgeable person in this area (having just tried a few local options), but for the past few months, my answer has been quite straightforward: kind.

If I had to summarize the “why kind” in three bullet points, it’d be:

cheap

kind, which stands for kubernetes in docker, runs (as you might’ve guessed) Kubernetes nodes as containers in Docker, and that’s it.

As long as you have docker installed, you’ll be able to run Kubernetes.

For instance, this is what creating a cluster looks like:

kind create cluster

        -> new container in docker holding k8s control plane + kubelet


docker ps -a

        -> you'll see the kubernetes node there


kubectl create -f ./my-pod.yaml

        -> request to the kube-apiserver inside that container
        -> creates a pod inside that docker container
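If you want to see for yourself that the pod really lands inside that container, you can list containerd’s containers from within the “node” - the kindest/node image ships with crictl (a quick sketch, assuming the cluster from above is up):

```shell
# list the containers that containerd runs *inside* the node container
docker exec -it kind-control-plane crictl ps
```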

so essentially you end up with something like this:

linux (phys machine / vm)
  docker:
    kind-control-plane (container in docker)
      containerd:
        pod
          container... (container in containerd in container from docker)
        pod
        pod
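And because each “node” is nothing more than a container, a multi-node cluster is just more containers. A sketch using kind’s cluster configuration (the node roles are its actual config API):

```yaml
# kind-multi-node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

After `kind create cluster --config kind-multi-node.yaml`, `docker ps` will show three “node” containers instead of one.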

That means that if you have Docker on, say, your macOS machine (where Docker necessarily runs a virtualized Linux machine on a lightweight hypervisor), you can use that existing infrastructure to run Kubernetes too, rather than having an extra virtual machine just for Kubernetes.

macos
  linux (vm)
   docker:
     kind-control-plane (container in docker)
       containerd:
         pod
           container... (container in containerd in container from docker)
         pod
         pod

If you’re on Linux, that’s even better - it’s just another set of processes running inside namespaces, cgroups, etc. (instead of a whole virtual machine), so you can imagine how quickly things come up.

ps.: the one thing that’s not as great for now is the size - uncompressed, the node image sits at approximately 1.2GiB, but I imagine efforts to trim Kubernetes down (by, e.g., creating builds without any of the cloud-provider stuff) will help.
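If you’re curious, you can check what the node image weighs on your own machine:

```shell
# the node image bundles the kubernetes binaries, systemd, containerd, etc
docker images kindest/node
```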

if it’s just a container … why do I need kind?

so, I thought: well, it’s just a single container … I can probably just docker run it, can’t I?

well, “not really”.

With execsnoop from bcc, we’re able to see every command that kind issues against docker. Looking at the docker run call, we can see what bringing up the node looks like:

docker run \
        --detach \
        --hostname kind-control-plane \
        --label io.x-k8s.kind.cluster=kind \
        --label io.x-k8s.kind.role=control-plane \
        --name kind-control-plane \
        --net kind \
        --privileged \
        --publish=127.0.0.1:33015:6443/TCP \
        --restart=on-failure:1 \
        --security-opt apparmor=unconfined \
        --security-opt seccomp=unconfined \
        --tmpfs /run \
        --tmpfs /tmp \
        --tty \
        --volume /lib/modules:/lib/modules:ro \
        --volume /var \
        kindest/node:v1.18.2 

While that might seem very promising, all it does is bring up a container running systemd with a containerd service already configured.

Continuing with the output of execsnoop, we can see that kind takes care of other details, like tweaking the kubeadm configuration, running kubeadm init, etc1:

docker exec --privileged -i kind-control-plane 

# handing the container the final merged
# version of `kubeadm`'s config (I guess)
#
cat /kind/version
mkdir -p /kind
cp /dev/stdin /kind/kubeadm.conf


# initializing kubernetes with the merged config
#
kubeadm init \
        --skip-phases=preflight \
        --config=/kind/kubeadm.conf \
        --skip-token-print --v=6


# starts setting up networking, storage, etc
#
kubectl \
        --kubeconfig=/etc/kubernetes/admin.conf \
        taint nodes \
        --all node-role.kubernetes.io/master-

cat /kind/manifests/default-cni.yaml | \
        kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -

cat /kind/manifests/default-storage.yaml | \
        kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -

cat /etc/kubernetes/admin.conf

so yeah, at the end of the day … yes, you could get a kubernetes cluster up by running the container yourself and then triggering a kubeadm init from inside it, but that alone would very likely not get you very far: probably just a kube-apiserver, and certainly not a fully working runtime.
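By the way, kind exposes that kubeadm config merging to you as well - the cluster configuration accepts kubeadmConfigPatches that get merged into the generated /kind/kubeadm.conf. A sketch (the node label here is a made-up example):

```yaml
# kind-config.yaml -- sketch; `my-label` is just an illustration
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "my-label=true"
```

You pass it along at creation time with `kind create cluster --config kind-config.yaml`.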

everything is tweakable

No, really - it is!

Some time ago I was working on a proof of concept of how concourse could potentially have all of its functionality preserved while switching its entire runtime over to kubernetes.

That was clearly a hack, not production-ready at all, but it was quite a nice experiment.

Part of that experiment involved rethinking how volume streaming works in concourse, imagining a possible implementation where a pod that fetches (or builds) a container image becomes a container image registry capable of serving that image.

To do so, I had to tweak kind’s containerd configuration to make it trust registries from anywhere inside the internal network, and guess what? It worked! You can read more about it here:

Again, you probably don’t need this, but, just in case you do want to do something like that, check out their documentation on configuring the clusters you create: https://kind.sigs.k8s.io/docs/user/configuration/
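For the record, that containerd tweak goes through the same cluster config - containerdConfigPatches get appended to the node’s containerd configuration. A sketch, where registry.internal:5000 is a made-up address for a registry inside the network that you’d want containerd to trust over plain HTTP:

```yaml
# kind-config.yaml -- sketch; `registry.internal:5000` is hypothetical
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.internal:5000"]
      endpoint = ["http://registry.internal:5000"]
```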

self-contained

Having literally everything described in terms of Docker containers means that getting rid of a Kubernetes cluster that you installed who-knows-what into is very simple - delete the container and that’s mostly it (there’ll still be a docker network and a volume, but you can get rid of those super easily too).

from the documentation:

State is offloaded into the “node” containers in the form of labels, files in the container filesystem, and processes in the container. The cluster itself stores all state.

No external state stores are used and the only stateful process is the container runtime.

kind does not itself store or manage state.

And this, my friends, is actually a big deal - working with custom resources all the time and installing all sorts of stuff in clusters, it’s very important for me to be able to quickly get rid of a cluster and create a new one from scratch without having to care about cleaning things up.

Fortunately, kind makes that a breeze.
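In practice, starting over looks roughly like this (the network and volume steps are only needed if you want every last trace gone):

```shell
kind delete cluster     # removes the node container(s) + kubeconfig entry
docker network rm kind  # the shared `kind` docker network
docker volume prune     # leftover anonymous volumes (asks for confirmation)
```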


  1. sure, we could’ve looked that stuff up in the source code, but most of the time external inspection has a higher return on time invested ↩︎