Knative: Interview Questions 1

Knative

  • What are the components of Knative? Knative Serving is built from four main resources: Service, Configuration, Revision, and Route. (The wider Knative project also includes the Build and Eventing components.)
  • Explain Knative Configuration? A Configuration defines the desired state of the application you are deploying. Each update to a Configuration creates a new Revision.
  • What is a Revision? A Revision is an immutable snapshot of your application at a point in time; every update to the Configuration generates a new Revision.
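As a rough sketch of the Configuration/Revision relationship (using the serving.knative.dev API; the image is the public Knative hello-world sample, and the exact API version may differ by Knative release):

```yaml
apiVersion: serving.knative.dev/v1
kind: Configuration
metadata:
  name: helloworld
spec:
  template:          # each change to this template stamps out a new Revision
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative"
```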

BUILD

  • What is a BuildTemplate? A BuildTemplate is a parameterized, reusable build definition; you pass values to its parameters to produce a customized Build.
  • What are the basic types of authentication? Basic auth, which uses a username and password, and SSH auth, which uses an SSH key.

REVISIONS

  • What are Revisions? They are immutable snapshots of your Configuration at a point in time.
  • What are the states of a Revision? Active (serving traffic), Reserved (scaled to zero pods), and Retired (will not receive traffic for now).

AUTOSCALER

  • Differentiate the Autoscaler and the Activator. The Autoscaler receives per-pod concurrency metrics and uses them to decide whether to scale the Deployment up or down, while the Activator acts as a catch-all for requests that arrive when no pods are running; it asks the Autoscaler to bring up new pods and then forwards the buffered requests to them.

Kubernetes Questions: Admission Controllers

  • What is an admission controller(ADC)?

An admission controller intercepts requests to the API server that have been authenticated and authorized but not yet persisted.

  • Describe the special types of admission controllers?

We have MutatingAdmissionControllers (MAC) and ValidatingAdmissionControllers (VAC). Mutating admission controllers can change the object before it is persisted, while validating admission controllers only validate it. Some controllers are both: they first mutate the object (the mutating phase always runs first) and then validate it.

  • What happens when an object fails any of the ADC phases?

If the object fails either phase, the entire request is rejected.

  • Give examples of admission controllers and what they do?
  1. ExtendedResourceToleration: This controller mutates the pod to add tolerations for extended-resource taints, allowing the pod to run on nodes dedicated to a custom resource, provided the resource was requested in the pod spec.
  2. LimitPodHardAntiAffinityTopology: This provides a safety net against a pod effectively denying service to the cluster; it rejects any pod whose hard anti-affinity rule uses a topology key other than kubernetes.io/hostname, since such a rule could prevent any other pods from being scheduled across an entire zone or region.
  • How do you enable and disable ADC?

Pass the --enable-admission-plugins and --disable-admission-plugins options to the API server and restart it.
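For example (the plugin names chosen here are illustrative; in a kubeadm cluster these flags live in the kube-apiserver static pod manifest):

```sh
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ExtendedResourceToleration \
               --disable-admission-plugins=PodNodeSelector
```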

 

Kubernetes: Access Through Proxy

Most people in the industry access Kubernetes using the kubectl command, and many are unaware of the other options. One of those options is curl or wget. In this example, we will focus on curl together with kubectl's proxy option, which lets us proxy our Kubernetes API calls.
*We assume you have an active Kubernetes cluster running, either through minikube or some other means.

 kubectl get pods
NAME            READY     STATUS    RESTARTS   AGE
twocontainers   2/2       Running   0          13m

From the above, we queried the list of pods running in the default namespace using kubectl. To start a local proxy, simply run;

 kubectl proxy --port=8088 &

Now, to list the running pods in the default namespace using curl;

 curl -s http://localhost:8088/api/v1/namespaces/default/pods/ | jq '.items[] .metadata.name'
"twocontainers"

The command above returns the names of the pods created in the default namespace.

Kubernetes: Pod Disruption Budget by Example

A Pod Disruption Budget (PDB) allows you to limit disruptions to a set of replica pods in your cluster without needing access to the controller template. In this example, we have an Nginx deployment with 8 replica pods and we want to make sure that at least 7 remain available regardless of how many voluntary disruption requests come in. The next question is: what are involuntary and voluntary disruption requests?

Involuntary disruptions are events in our cluster such as node failure, a node running out of resources, cloud-provider outages, or VM shutdowns that affect the number of pods we have, while voluntary disruptions are requests made to the Kubernetes API that affect pod distribution, such as node draining and pod eviction. This is our Deployment template;

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-redis-deployment
  labels:
    app: webserver
spec:
  replicas: 8
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
      - name: redis
        imagePullPolicy: IfNotPresent
        image: redis

As you can see from above, our deployment has 8 replica pods, and we want to use a PDB to make sure there are at least 7 running at any given time; any voluntary disruption request that would put the number of pods below 7 gets denied.

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo
spec:
  minAvailable: 7
  selector:
    matchLabels:
      app: webserver

Let’s go ahead and run kubectl apply to create the deployment along with PodDisruptionBudget.

$ kubectl get deployment/nginx-redis-deployment
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-redis-deployment   8         8         8            8           1h
$ kubectl get pdb
NAME       MIN-AVAILABLE   MAX-UNAVAILABLE   ALLOWED-DISRUPTIONS   AGE
pdb-demo   7               N/A               1                     7h
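Equivalently, the budget could be expressed as a maximum number of unavailable pods instead (a sketch; minAvailable and maxUnavailable are mutually exclusive within a single PDB):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo
spec:
  maxUnavailable: 1   # at most one of the 8 replicas may be down voluntarily
  selector:
    matchLabels:
      app: webserver
```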

Now, let's go ahead and drain the minikube node with kubectl drain minikube --ignore-daemonsets. You will notice that drain was only able to evict one pod from the node, since any further eviction would violate our PDB.

$ kubectl get deployment/nginx-redis-deployment
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-redis-deployment   8         8         8            7           2h
$ kubectl get pods
NAME                                      READY     STATUS    RESTARTS   AGE
nginx-redis-deployment-6b95986b48-86gg5   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-gkkdl   0/2       Pending   0          6m
nginx-redis-deployment-6b95986b48-h8xhk   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-hkntf   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-jq6dz   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-nk8h6   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-qwlm5   2/2       Running   0          2h
nginx-redis-deployment-6b95986b48-w9vkl   2/2       Running   0          2h

Kubernetes: Exposing Services through NodePort

There are different ways to expose containers running in a Kubernetes cluster: ClusterIP, NodePort, LoadBalancer, and Ingress. In this tutorial, I will be discussing NodePort, using the template below;

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: webserver
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 80
  - name: redis
    protocol: TCP
    port: 6379
    targetPort: 6379
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-redis-deployment
  labels:
    app: webserver
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        livenessProbe:
          tcpSocket:
            port: 80
      - name: redis
        imagePullPolicy: IfNotPresent
        image: redis
        livenessProbe:
          tcpSocket:
            port: 6379

If you run kubectl create -f deployment.yml, it creates both the Service and the Deployment. Go ahead and run kubectl describe svc my-service;

Name:			my-service
Namespace:		default
Labels:			
Annotations:		kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-service","namespace":"default"},"spec":{"ports":[{"name":"http","port":8080...
Selector:		app=webserver
Type:			NodePort
IP:			10.0.0.188
Port:			http	8080/TCP
NodePort:		http	32044/TCP
Endpoints:		172.17.0.4:80,172.17.0.5:80,172.17.0.6:80
Port:			redis	6379/TCP
NodePort:		redis	31977/TCP
Endpoints:		172.17.0.4:6379,172.17.0.5:6379,172.17.0.6:6379
Session Affinity:	None
Events:

As you can see, the NodePort service type allocates a port on the cluster node and maps it to the corresponding endpoint pod ports (32044 -> 80 and 31977 -> 6379). Next, let's get the node IP and make calls to ports 32044 and 31977 with kubectl describe nodes;

...
Addresses:
  InternalIP:	192.168.99.100
  Hostname:	minikube
Capacity:
 cpu:		2
 memory:	2048076Ki
 pods:		110
...

From the above, the cluster node IP is 192.168.99.100. Let's curl port 32044 and telnet to port 31977.

$ curl 192.168.99.100:32044
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

$ telnet 192.168.99.100 31977
Trying 192.168.99.100...
Connected to 192.168.99.100.
Escape character is '^]'.
SET hello 5
+OK
quit
+OK
Connection closed by foreign host.
$

You will achieve the same result using the service IP and the service port, run directly on the node where your pods are running;

$ curl 10.0.0.188:8080
(the same nginx welcome page as above)
$ telnet 10.0.0.188 6379
SET hello 87
+OK
quit
+OK
Connection closed by foreign host
$

As you can see, kube-proxy handles the job of mapping the NodePort (which can be auto-allocated for you, or specified explicitly) to the pods' targetPort.
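To pin the node port instead of letting it be allocated, set nodePort on the service port entry (a sketch; 30080 is an arbitrary choice within the default 30000-32767 node-port range):

```yaml
ports:
- name: http
  protocol: TCP
  port: 8080
  targetPort: 80
  nodePort: 30080   # fixed, instead of a randomly allocated port
```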

How to configure Kubernetes Lifecycle Hooks

In this example, I am going to demonstrate the use cases for Kubernetes lifecycle hooks. Lifecycle hooks let you take actions after a container starts or immediately before it exits. They can be used in cases where we need to clean up database connections before exiting, or copy files into the right location before our container starts using them.

In this example, I will be demonstrating preStop and postStart hook. This is our Pod template;

apiVersion: v1
kind: Pod
metadata:
  name: redis-lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: redis
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo 'This is postStart lol' > ~/lol"]
      preStop:
        exec:
          command: ["/usr/local/bin/redis-cl shutdown && echo Love from the abyss"]

If you run kubectl apply -f pod.yml, the redis-lifecycle-demo pod will be created, and we can log in to its container using kubectl exec -it redis-lifecycle-demo -- sh

$ kubectl exec -it redis-lifecycle-demo -- sh
# ls
# cat ~/lol
This is postStart lol
# exit

As you can see, our postStart hook ran successfully and created the lol file (just what came to my mind). The preStop hook is triggered during the pod deletion phase, which we have to trigger manually. There is no direct way to confirm the preStop hook ran, since the pod itself is deleted before we can check, but by using a preStop command that will fail (redis-cl instead of redis-cli), we can observe the failure event. Let's delete the pod using
kubectl delete pod redis-lifecycle-demo and describe it before the pod deletion completes.

kubectl describe pod redis-lifecycle-demo
...
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath					Type		Reason			Message
  ---------	--------	-----	----			-------------					--------	------			-------
  10m		10m		1	default-scheduler							Normal		Scheduled		Successfully assigned redis-lifecycle-demo to minikube
  10m		10m		1	kubelet, minikube							Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "default-token-dk6z1"
  10m		10m		1	kubelet, minikube	spec.containers{lifecycle-demo-container}	Normal		Pulled			Container image "redis" already present on machine
  10m		10m		1	kubelet, minikube	spec.containers{lifecycle-demo-container}	Normal		Created			Created container
  10m		10m		1	kubelet, minikube	spec.containers{lifecycle-demo-container}	Normal		Started			Started container
  9s		9s		1	kubelet, minikube	spec.containers{lifecycle-demo-container}	Warning		FailedPreStopHook
  9s		9s		1	kubelet, minikube	spec.containers{lifecycle-demo-container}	Normal		Killing			Killing container with id docker://lifecycle-demo-container:Need to kill Pod

You will see the warning event indicating that our preStop hook failed, which confirms it was triggered. This completes our postStart and preStop demo. Here are some of the things that you need to know about the hooks.

  • They can be delivered more than once, so make sure your hook handles being run multiple times (keep it idempotent).
  • It is not guaranteed that postStart will run before your container's entrypoint command.
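For a preStop hook that actually succeeds, note that exec runs its arguments directly rather than through a shell, so shell syntax such as && needs an explicit /bin/sh -c (a sketch of the corrected hook):

```yaml
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "redis-cli shutdown nosave && echo 'Love from the abyss'"]
```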

Ansible Interview Questions part 1

  • Explain how Ansible works?

Ansible works by SSHing into the host and copying over modules, which execute the tasks needed to bring our system into the desired state. The machine we SSH from is called the control machine, and the remote system is called the host.

  • Advantages of Ansible over other tools like Chef?

These are some of the advantages of Ansible over Chef:

– It is agentless: you do not need to install agents on hosts to configure them.
– It is very easy to pick up.
– Very good performance.

  • Explain Ansible galaxy?

Galaxy refers to both the website and the CLI tool used to interact with it, where you can download and share roles with other members of the Ansible community.

  • How do we make a variable available to a host or group without including it in the inventory file?

You can create a variable file under group_vars. For example, if we want to make a variable available to the webservers host group, we simply create group_vars/webservers and define the variable inside that file.
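For instance (the variable names here are made up for illustration):

```yaml
# group_vars/webservers: applies to every host in the [webservers] group
http_port: 8080
max_clients: 200
```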

  • Explain Forks in Ansible

Forks is a setting that can improve Ansible performance by defining how many parallel Ansible processes are created to communicate with remote hosts.

  • Explain Pipelining in Ansible.

Pipelining allows Ansible to stream commands over a single SSH connection instead of opening a new connection for each module execution.

  • How can we use controlpersist to speed up ansible deployment?

ControlPersist allows us to create a single SSH master connection that is reused for subsequent connections for a given amount of time.
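The three performance settings above (forks, pipelining, ControlPersist) can be sketched in ansible.cfg like this (the values are illustrative, not recommendations):

```ini
[defaults]
forks = 20            # parallel worker processes talking to remote hosts

[ssh_connection]
pipelining = True     # stream module execution over one SSH session
ssh_args = -o ControlMaster=auto -o ControlPersist=60s   # reuse a master connection for 60s
```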

  • Explain “fire and forget” concept in ansible.

This allows us to run a task without waiting for it to complete. You simply run the task with async set and poll: 0. Later in the playbook, use async_status to check the status of the job.
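A sketch of the fire-and-forget pattern (the script path and timings are made up):

```yaml
- name: Kick off a long-running job without waiting
  command: /usr/local/bin/nightly-backup.sh   # hypothetical script
  async: 3600        # allow up to an hour for the job
  poll: 0            # fire and forget
  register: backup_job

# ...other tasks run here in the meantime...

- name: Check on the job later
  async_status:
    jid: "{{ backup_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 10
```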

  • Why would you want to disable Ansible facts?

You can disable fact gathering when the facts are not used, to save the time and memory spent collecting and storing the variables that fact gathering creates.
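Disabling facts is a one-liner at the play level (sketch):

```yaml
- hosts: webservers
  gather_facts: false   # skip the setup module; ansible_* fact variables will be absent
  tasks:
    - name: Ping without gathering facts
      ping:
```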

  • What are ansible strategies?

Ansible strategies are plugins that modify the way Ansible executes a play across hosts. For example, the linear strategy runs each task on all hosts and waits for every host to complete before moving to the next task, while the free strategy lets each host move on to its next task as soon as it finishes. The debug strategy executes like linear but drops into the debugger on failure.
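Selecting a strategy is done per play (a sketch):

```yaml
- hosts: webservers
  strategy: free   # each host proceeds through tasks at its own pace
  tasks:
    - name: Update package cache
      apt:
        update_cache: yes
```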