Creating a Kubernetes cluster with multiple nodes using minikube


6 min read | by Jordi Prats

Minikube is very useful for creating mockup environments for testing purposes, but it can also be used when studying for your CKA certification. Some topics, such as node affinity or node failure, require a multi-node cluster, but you can still use minikube for them.

To start a 3-node cluster using minikube you just need to add the --nodes option to the start command, for example:

$ minikube start --nodes 3
πŸ˜„  minikube v1.16.0 on Ubuntu 20.04
✨  Automatically selected the docker driver. Other choices: kvm2, none
πŸ‘  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
πŸ’Ύ  Downloading Kubernetes v1.20.0 preload ...
    > preloaded-images-k8s-v8-v1....: 491.00 MiB / 491.00 MiB  100.00% 17.20 Mi
πŸ”₯  Creating docker container (CPUs=2, Memory=2633MB) ...
🐳  Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
    β–ͺ Generating certificates and keys ...
    β–ͺ Booting up control plane ...
    β–ͺ Configuring RBAC rules ...
πŸ”—  Configuring CNI (Container Networking Interface) ...
πŸ”Ž  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass

πŸ‘  Starting node minikube-m02 in cluster minikube
πŸ”₯  Creating docker container (CPUs=2, Memory=2633MB) ...
🌐  Found network options:
    β–ͺ NO_PROXY=192.168.49.2
🐳  Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
    β–ͺ env NO_PROXY=192.168.49.2
πŸ”Ž  Verifying Kubernetes components...

πŸ‘  Starting node minikube-m03 in cluster minikube
πŸ”₯  Creating docker container (CPUs=2, Memory=2633MB) ...
🌐  Found network options:
    β–ͺ NO_PROXY=192.168.49.2,192.168.49.3
🐳  Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
    β–ͺ env NO_PROXY=192.168.49.2
    β–ͺ env NO_PROXY=192.168.49.2,192.168.49.3
πŸ”Ž  Verifying Kubernetes components...
πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Once minikube finishes, you can use kubectl get nodes to check that you have a 3-node cluster:

$ kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
minikube       Ready    control-plane,master   24m   v1.20.0
minikube-m02   Ready    <none>                 22m   v1.20.0
minikube-m03   Ready    <none>                 21m   v1.20.0
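
Since one of the reasons for having multiple nodes is practicing scheduling topics such as node affinity, we can now label each worker differently; for example (disktype=ssd is just an arbitrary label chosen for testing):

$ kubectl label node minikube-m02 disktype=ssd
node/minikube-m02 labeled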

What minikube does is spin up three Docker containers, one per node:

$ docker ps
CONTAINER ID   IMAGE                                           COMMAND                  CREATED          STATUS          PORTS                                                                                                      NAMES
8eabef14b49f   gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4   "/usr/local/bin/entr…"   21 minutes ago   Up 21 minutes   127.0.0.1:32787->22/tcp, 127.0.0.1:32786->2376/tcp, 127.0.0.1:32785->5000/tcp, 127.0.0.1:32784->8443/tcp   minikube-m03
7b62f81d236a   gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4   "/usr/local/bin/entr…"   22 minutes ago   Up 22 minutes   127.0.0.1:32783->22/tcp, 127.0.0.1:32782->2376/tcp, 127.0.0.1:32781->5000/tcp, 127.0.0.1:32780->8443/tcp   minikube-m02
afd1211a1178   gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4   "/usr/local/bin/entr…"   24 minutes ago   Up 24 minutes   127.0.0.1:32779->22/tcp, 127.0.0.1:32778->2376/tcp, 127.0.0.1:32777->5000/tcp, 127.0.0.1:32776->8443/tcp   minikube
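
minikube can also list the nodes it manages together with their IP addresses, so we don't need to rely on docker ps; for this cluster the output would look roughly like this (same 192.168.49.x addresses we saw in the NO_PROXY settings above):

$ minikube node list
minikube	192.168.49.2
minikube-m02	192.168.49.3
minikube-m03	192.168.49.4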

We can easily simulate a node failure by stopping one of the worker containers. Before doing so, let's create a deployment with 6 replicas so we have some pods to watch:

$ kubectl create deployment demo --image=nginx --replicas=6
deployment.apps/demo created
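
We can wait until all 6 replicas are up before breaking anything:

$ kubectl rollout status deployment demo
deployment "demo" successfully rolled out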

The pods will be distributed across the two worker nodes we have:

$ kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
demo-997c454df-kj4j7   1/1     Running   0          47s   10.244.2.2   minikube-m03   <none>           <none>
demo-997c454df-pzks9   1/1     Running   0          47s   10.244.1.3   minikube-m02   <none>           <none>
demo-997c454df-q7br7   1/1     Running   0          47s   10.244.1.2   minikube-m02   <none>           <none>
demo-997c454df-s4zrg   1/1     Running   0          47s   10.244.2.4   minikube-m03   <none>           <none>
demo-997c454df-shtgs   1/1     Running   0          47s   10.244.1.4   minikube-m02   <none>           <none>
demo-997c454df-v8c2q   1/1     Running   0          47s   10.244.2.3   minikube-m03   <none>           <none>
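
If we only care about the pods sitting on the node we are about to take down, a field selector lets us filter by node name:

$ kubectl get pods -o wide --field-selector spec.nodeName=minikube-m03

With the distribution above this would list just the three pods on minikube-m03.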

Now we can simulate a crash on one of the nodes by stopping one of the worker containers; since each Docker container is named after its node, this is quite simple:

$ docker stop minikube-m03
minikube-m03
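
To follow the node status transition in real time we can leave a watch running in a second terminal:

$ kubectl get nodes -w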

Shortly afterwards, with kubectl get nodes we will be able to see that the cluster realizes the node is no longer in the Ready state:

$ kubectl get nodes
NAME           STATUS     ROLES                  AGE    VERSION
minikube       Ready      control-plane,master   4h8m   v1.20.0
minikube-m02   Ready      <none>                 4h6m   v1.20.0
minikube-m03   NotReady   <none>                 4h5m   v1.20.0
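
At this point the node lifecycle controller has also tainted the failed node; it is these taints that the pods' tolerations (visible in the describe output below) react to. We can check them with something like:

$ kubectl describe node minikube-m03 | grep -A 2 Taints

We should see node.kubernetes.io/unreachable (or not-ready) taints with NoSchedule and NoExecute effects listed there.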

Despite that, pods won't be moved immediately. We can use kubectl describe on one of the pods that were on the failed node to see the events:

$ kubectl describe pod demo-997c454df-kj4j7
Name:         demo-997c454df-kj4j7
Namespace:    default
Priority:     0
Node:         minikube-m03/192.168.49.4
Start Time:   Tue, 05 Jan 2021 16:16:14 +0100
Labels:       app=demo
              pod-template-hash=997c454df
Annotations:  <none>
Status:       Running
IP:           10.244.2.2
IPs:
  IP:           10.244.2.2
Controlled By:  ReplicaSet/demo-997c454df
Containers:
  nginx:
    Container ID:   docker://48f40340c742d6aeb03613633108c1ef3791bea219f769288a8f1f825483b981
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:4cf620a5c81390ee209398ecc18e5fb9dd0f5155cd82adcbae532fec94006fb9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 05 Jan 2021 16:16:44 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lscdz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-lscdz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-lscdz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Normal   Scheduled         5m58s                  default-scheduler  Successfully assigned default/demo-997c454df-kj4j7 to minikube-m03
  Normal   Pulling           5m53s                  kubelet            Pulling image "nginx"
  Normal   Pulled            5m34s                  kubelet            Successfully pulled image "nginx" in 18.580908133s
  Normal   Created           5m28s                  kubelet            Created container nginx
  Normal   Started           5m26s                  kubelet            Started container nginx
  Warning  DNSConfigForming  2m52s (x6 over 5m58s)  kubelet            Search Line limits were exceeded, some search paths have been omitted, the applied search line is: default.svc.cluster.local svc.cluster.local cluster.local int.compumark.com dresources.com markmonitor.com
  Warning  NodeNotReady      91s                    node-controller    Node is not ready

Once we can see the NodeNotReady event on the pod, Kubernetes still won't try to recover the lost pods for 5 minutes: that grace period comes from the 300-second node.kubernetes.io/not-ready and node.kubernetes.io/unreachable tolerations shown above (on older clusters it was controlled by the --pod-eviction-timeout flag on the controller manager). After those 5 minutes it terminates the dead pods and creates new pods on the surviving node to get back to the deployment's desired state:

$ kubectl get pods -o wide
NAME                   READY   STATUS        RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
demo-997c454df-d8ng4   1/1     Running       0          27s     10.244.1.7   minikube-m02   <none>           <none>
demo-997c454df-kj4j7   1/1     Terminating   0          9m59s   10.244.2.2   minikube-m03   <none>           <none>
demo-997c454df-ksk9d   1/1     Running       0          27s     10.244.1.5   minikube-m02   <none>           <none>
demo-997c454df-pzks9   1/1     Running       0          9m59s   10.244.1.3   minikube-m02   <none>           <none>
demo-997c454df-q7br7   1/1     Running       0          9m59s   10.244.1.2   minikube-m02   <none>           <none>
demo-997c454df-s4zrg   1/1     Terminating   0          9m59s   10.244.2.4   minikube-m03   <none>           <none>
demo-997c454df-shtgs   1/1     Running       0          9m59s   10.244.1.4   minikube-m02   <none>           <none>
demo-997c454df-v8c2q   1/1     Terminating   0          9m59s   10.244.2.3   minikube-m03   <none>           <none>
demo-997c454df-wf5p7   1/1     Running       0          27s     10.244.1.6   minikube-m02   <none>           <none>
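
If 5 minutes is too long to wait during tests, rather than reconfiguring the controller manager we can shorten the tolerations on the deployment itself; a minimal sketch using kubectl patch (30 seconds is an arbitrary value picked for testing):

$ kubectl patch deployment demo --patch '
spec:
  template:
    spec:
      tolerations:
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30'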

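Once we are done experimenting we can bring the node back the same way we stopped it; after a few moments it will rejoin the cluster in Ready state:

$ docker start minikube-m03
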
Posted on 19/01/2021