Kubernetes: Pod Disruption Budget by Example

Pod Disruption Budgets (PDBs) let you limit how many pods of a replicated application can be taken down at once, without needing access to the controller template. In this example, we have an Nginx deployment with 8 replica pods and we want to make sure that at least 7 stay available no matter how many voluntary disruption requests come in. The next question is, what are involuntary and voluntary disruptions?

Involuntary disruptions are events in the cluster, like a node failure, a node running out of resources, a cloud provider outage, or a VM shutdown, that affect the number of pods we have running. Voluntary disruptions are requests made to the Kubernetes API that affect pod placement, like draining a node or evicting a pod. This is our Deployment template;

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-redis-deployment
  labels:
    app: webserver
spec:
  replicas: 8
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
      - name: redis
        imagePullPolicy: IfNotPresent
        image: redis

As you can see from above, our deployment has 8 replica pods. We want to use a PDB to make sure there are at least 7 running at any given time, so that any voluntary disruption request that would drop the number of available pods below 7 gets denied.

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo
spec:
  minAvailable: 7
  selector:
    matchLabels:
      app: webserver
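
Note that minAvailable also accepts a percentage (for example 90%), and the same budget can be expressed the other way around with maxUnavailable. For our 8-replica deployment, this spec would be equivalent:

spec:
  # allow at most one pod to be down from voluntary disruptions
  maxUnavailable: 1
  selector:
    matchLabels:
      app: webserver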

Let’s go ahead and run kubectl apply to create the Deployment along with the PodDisruptionBudget.
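
Assuming the two manifests above are saved as deployment.yml and pdb.yml (file names of my choosing):

$ kubectl apply -f deployment.yml
$ kubectl apply -f pdb.yml

Then verify that both objects are in place;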

$ kubectl get deployment/nginx-redis-deployment
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-redis-deployment   8         8         8            8           1h
$ kubectl get pdb
NAME       MIN-AVAILABLE   MAX-UNAVAILABLE   ALLOWED-DISRUPTIONS   AGE
pdb-demo   7               N/A               1                     7h

Now, let's go ahead and drain the minikube node. You will notice that drain is only able to evict one pod from the node, since evicting any more would violate our PDB.
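
A typical drain invocation looks like this (--ignore-daemonsets is needed if the node runs DaemonSet-managed pods):

$ kubectl drain minikube --ignore-daemonsets

Once evicting another pod would breach the budget, drain keeps retrying and reports errors along the lines of "Cannot evict pod as it would violate the pod's disruption budget."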

$ kubectl get deployment/nginx-redis-deployment
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-redis-deployment   8         8         8            7           2h
$ kubectl get pods
NAME                                      READY     STATUS    RESTARTS   AGE
nginx-redis-deployment-6b95986b48-86gg5   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-gkkdl   0/2       Pending   0          6m
nginx-redis-deployment-6b95986b48-h8xhk   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-hkntf   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-jq6dz   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-nk8h6   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-qwlm5   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-w9vkl   2/2       Running   0          2h
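
Once you are done experimenting, make the node schedulable again so that the Pending pod can be placed:

$ kubectl uncordon minikube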

Kubernetes: Exposing Services through NodePort

There are several ways to expose containers running in a Kubernetes cluster: ClusterIP, NodePort, LoadBalancer, and Ingress. In this tutorial, I will be discussing NodePort, using the template below;

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: webserver
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 80
  - name: redis
    protocol: TCP
    port: 6379
    targetPort: 6379
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-redis-deployment
  labels:
    app: webserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        livenessProbe:
          tcpSocket:
            port: 80
      - name: redis
        imagePullPolicy: IfNotPresent
        image: redis
        livenessProbe:
          tcpSocket:
            port: 6379

If you run kubectl create -f deployment.yml, it creates both the service and the deployment. Go ahead and run kubectl describe svc my-service;

Name:			my-service
Namespace:		default
Labels:			<none>
Annotations:		kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-service","namespace":"default"},"spec":{"ports":[{"name":"http","port":8080...
Selector:		app=webserver
Type:			NodePort
IP:			10.0.0.188
Port:			http	8080/TCP
NodePort:		http	32044/TCP
Endpoints:		172.17.0.4:80,172.17.0.5:80,172.17.0.6:80
Port:			redis	6379/TCP
NodePort:		redis	31977/TCP
Endpoints:		172.17.0.4:6379,172.17.0.5:6379,172.17.0.6:6379
Session Affinity:	None
Events:			<none>

As you can see, the NodePort service type allocates a port on the cluster node and maps it to the corresponding pod port behind each endpoint (32044 -> 80 and 31977 -> 6379). Next, let's get the node IP so we can make calls to ports 32044 and 31977; kubectl describe nodes

...
Addresses:
  InternalIP:	192.168.99.100
  Hostname:	minikube
Capacity:
 cpu:		2
 memory:	2048076Ki
 pods:		110
...
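
On minikube, minikube ip prints the same address; on any cluster, a jsonpath query does the job:

$ kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
192.168.99.100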

From above, the cluster node IP is 192.168.99.100. Let's curl port 32044 and telnet to port 31977.

$ curl 192.168.99.100:32044
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

$ telnet 192.168.99.100 31977
Trying 192.168.99.100...
Connected to 192.168.99.100.
Escape character is '^]'.
SET hello 5
+OK
quit
+OK
Connection closed by foreign host.
$
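
If you happen to have redis-cli installed locally, the same check works without telnet:

$ redis-cli -h 192.168.99.100 -p 31977 SET hello 5
OK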

You will achieve the same result using the service IP and the service port, run directly from the node where your pods are running (minikube ssh gets you a shell on the node);

$ curl 10.0.0.188:8080
...the same nginx welcome page as above...

$ telnet 10.0.0.188 6379
SET hello 87
+OK
quit
+OK
Connection closed by foreign host.
$

As you can see, kube-proxy handles the job of mapping the NodePort (which can be auto-allocated for you or specified explicitly) to the pods' targetPort.
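
If you want a stable port rather than an auto-allocated one, set nodePort explicitly on the service; the value has to fall within the cluster's node port range (30000-32767 by default):

  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 32044    # pin the node port instead of letting Kubernetes pick one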

How to Configure Kubernetes Lifecycle Hooks

In this post, I am going to demonstrate the use case for Kubernetes lifecycle hooks. Lifecycle hooks are used to take action right after a container starts or immediately before it exits. They can be used in cases where we need to clean up database connections before exiting, or copy files to the right location before our container starts using them.

In this example, I will be demonstrating the postStart and preStop hooks. This is our Pod template;

apiVersion: v1
kind: Pod
metadata:
  name: redis-lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: redis
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo 'This is postStart lol' > ~/lol"]
      preStop:
        exec:
          command: ["/usr/local/bin/redis-cl shutdown && echo Love from the abyss"]

If you run kubectl apply -f pod.yml, the redis-lifecycle-demo pod will be created, and we can go ahead and get a shell inside the container using kubectl exec -it redis-lifecycle-demo -- sh;

$ kubectl exec -it redis-lifecycle-demo -- sh
# ls
# cat ~/lol
This is postStart lol
# exit

As you can see, our postStart hook ran successfully by creating the lol file (just what came to mind). The preStop hook is triggered during the pod deletion phase, which we have to start manually. There is no easy way to confirm that the preStop hook ran in this case, since the pod itself is deleted before we can check; but by swapping in a preStop command that is guaranteed to fail (for example, redis-blablah instead of redis-cli) and deleting the pod, we can confirm the hook fires through the failure event. Let's delete the pod using kubectl delete pod redis-lifecycle-demo and describe it before the deletion completes.
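
Concretely, run the delete in one terminal and, before the 30-second default termination grace period runs out, describe the pod from a second terminal:

$ kubectl delete pod redis-lifecycle-demo
pod "redis-lifecycle-demo" deleted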

$ kubectl describe pod redis-lifecycle-demo
...
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath					Type		Reason			Message
  ---------	--------	-----	----			-------------					--------	------			-------
  10m		10m		1	default-scheduler							Normal		Scheduled		Successfully assigned redis-lifecycle-demo to minikube
  10m		10m		1	kubelet, minikube							Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "default-token-dk6z1"
  10m		10m		1	kubelet, minikube	spec.containers{lifecycle-demo-container}	Normal		Pulled			Container image "redis" already present on machine
  10m		10m		1	kubelet, minikube	spec.containers{lifecycle-demo-container}	Normal		Created			Created container
  10m		10m		1	kubelet, minikube	spec.containers{lifecycle-demo-container}	Normal		Started			Started container
  9s		9s		1	kubelet, minikube	spec.containers{lifecycle-demo-container}	Warning		FailedPreStopHook
  9s		9s		1	kubelet, minikube	spec.containers{lifecycle-demo-container}	Normal		Killing			Killing container with id docker://lifecycle-demo-container:Need to kill Pod

You will see the warning event indicating that our preStop hook failed, which confirms it was triggered. This completes our postStart and preStop demo. Here are some of the things you need to know about the hooks.

  • They can be delivered more than once, so you need to make sure your handler copes with running multiple times (see the sketch below).
  • It is not guaranteed that postStart will run before your container's entrypoint command.
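
As a minimal sketch of the first point, a postStart handler can guard itself with a marker file so that repeated deliveries are harmless (/opt/do-init.sh is a stand-in for whatever initialization you actually need):

lifecycle:
  postStart:
    exec:
      # If the marker file exists, do nothing; otherwise run the init
      # script and drop the marker so a repeated delivery becomes a no-op.
      command: ["/bin/sh", "-c", "[ -f /tmp/.post-start-done ] || { /opt/do-init.sh && touch /tmp/.post-start-done; }"]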