
Persistent volumes and persistent volume claims

To bind a PVC with a PV, the accessModes and storageClassName of both should be the same.

PV:

spec:
  hostPath:
    path: <path-on-node>
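
A minimal PV/PVC pair sketch that satisfies the binding rule above (the names, the 1Gi size, and the 'manual' storage class are illustrative assumptions):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv                 # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce             # must match the PVC
  storageClassName: manual    # must match the PVC
  hostPath:
    path: /mnt/data           # illustrative host path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc                # illustrative name
spec:
  accessModes:
  - ReadWriteOnce             # same as the PV
  storageClassName: manual    # same as the PV
  resources:
    requests:
      storage: 1Gi            # must fit within the PV capacity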

PODS

labels

show all the labels of pods
kubectl get pods --show-labels

Change the labels of pod 'nginx' to be app=v2
kubectl label pod nginx app=v2 --overwrite

Get the label 'app' for the pods
kubectl get pods --label-columns=app

Get only the 'app=v2' pods
kubectl get pods --selector=app=v2

Remove the 'app' label from the nginx pod
kubectl label pod nginx app-

Create a pod that will be deployed to a Node that has the label 'accelerator=nvidia-tesla-p100'
First add the label to the node
kubectl label node <nodename> accelerator=nvidia-tesla-p100
Then use the 'nodeSelector' property in the Pod YAML; under nodeSelector, give the label (see the sketch below).
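
A sketch of the relevant part of the Pod spec, using the label from above (the container is illustrative):

spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    accelerator: nvidia-tesla-p100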

To know where to write nodeSelector in the yaml file.
kubectl explain po.spec

Annotate pod nginx with "description='my description'" value
kubectl annotate pod nginx description='my description'

check the annotations for pod nginx
kubectl describe pod nginx | grep -i 'annotations'

remove the annotation
kubectl annotate pod nginx description-

check how the deployment rollout is going
kubectl rollout status deploy <deploymentname>

check the rollout history
kubectl rollout history deploy <deploymentname>

undo the latest rollout
kubectl rollout undo deploy <deploymentname>

Return the deployment to the second revision (number 2)
kubectl rollout undo deploy <deploymentname> --to-revision=2

Check the details of the fourth revision
kubectl rollout history deploy <deploymentname> --revision=4

Autoscale the deployment, pods between 5 and 10, targeting CPU utilization at 80%
kubectl autoscale deploy <deploymentname> --min=5 --max=10 --cpu-percent=80

pause the rollout of the deployment
kubectl rollout pause deploy <deploymentname>

Create a Horizontal Pod Autoscaler for deployment nginx that maintains between 1 and 10 replicas of the Pods, targeting CPU utilization at 80%
kubectl autoscale deploy nginx --min=1 --max=10 --cpu-percent=80

Delete the deployment and the horizontal pod autoscaler you created
kubectl delete deploy nginx
kubectl delete hpa nginx
or
kubectl delete deploy/nginx hpa/nginx

Create a job with image perl
kubectl create job pi --image=perl

Create a job with image perl that runs the command with arguments "perl -Mbignum=bpi -wle 'print bpi(2000)'"
kubectl create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'

Create a job but ensure that it will be automatically terminated by kubernetes if it takes more than 30 seconds to execute
kubectl create job busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'while true; do echo hello; sleep 10; done'
Add activeDeadlineSeconds: 30 under the Job spec section in the YAML file and create the job (see the sketch below).
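
A sketch of the edited YAML:

apiVersion: batch/v1
kind: Job
metadata:
  name: busybox
spec:
  activeDeadlineSeconds: 30   # Kubernetes terminates the Job after 30 seconds
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "while true; do echo hello; sleep 10; done"]
      restartPolicy: Never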

configmaps

from literals
kubectl create cm map1 --from-literal=var1=val1

from file
echo -e 'var1=val1\nvar2=val2' > config.txt
kubectl create cm map2 --from-file=config.txt

from env file
echo -e 'var1=val1\nvar2=val2' > config.env
kubectl create cm map3 --from-env-file=config.env

from a file, giving the key special
kubectl create cm map4 --from-file=special=config.txt
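
To verify that the file contents were stored under the custom key 'special':

kubectl get cm map4 -o yaml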

Create a configMap called 'options' with the value var5=val5. Create a new nginx pod that loads the value from variable 'var5' in an env variable called 'option'
kubectl create cm options --from-literal=var5=val5
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
Add

env:
- name: option
  valueFrom:
    configMapKeyRef:
      name: options
      key: var5
under spec.containers

Secrets

Create a secret with the values password=mypass
kubectl create secret generic mysecret --from-literal=password=mypass

from file
kubectl create secret generic mysecret --from-file=<filename>

To decode the value of the secret
echo <encoded-value> | base64 -d

PODS

Create an nginx pod and list the pod with different levels of verbosity

kubectl run nginx --image=nginx
kubectl get pod nginx --v=7
kubectl get pod nginx --v=8

List the nginx pod with custom columns POD_NAME and POD_STATUS

kubectl get pod nginx -o yaml | less
kubectl get po -o custom-columns="POD_NAME:.metadata.name,POD_STATUS:.status.containerStatuses[].state"

List all the pods sorted by name

kubectl get pods --sort-by=.metadata.name

List all the pods sorted by created timestamp

kubectl get pods --sort-by=.metadata.creationTimestamp

Output the yaml file of the pod you just created without the cluster-specific information

kubectl get po nginx -o yaml --export > 1.yaml
(note: the --export flag is deprecated and removed in recent kubectl versions; there, strip the cluster-specific fields from the output by hand)

Create a Pod with three busy box containers with commands “ls; sleep 3600;”, “echo Hello World; sleep 3600;” and “echo this is the third container; sleep 3600” respectively and check the status

kubectl run busybox1 --image=busybox --dry-run=client -o yaml -- /bin/sh -c "ls; sleep 3600" > multi.yaml
Edit the file to add the remaining two containers (see the sketch below), then:
kubectl create -f multi.yaml
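
A sketch of the edited multi.yaml, keeping the generated pod name busybox1 (the container names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox1
spec:
  containers:
  - name: busybox1
    image: busybox
    command: ["/bin/sh", "-c", "ls; sleep 3600"]
  - name: busybox2
    image: busybox
    command: ["/bin/sh", "-c", "echo Hello World; sleep 3600"]
  - name: busybox3
    image: busybox
    command: ["/bin/sh", "-c", "echo this is the third container; sleep 3600"]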

To check a container's logs
kubectl logs busybox1 -c <containername>

Run command ls in the third container busybox3 of the above pod

kubectl exec busybox1 -c busybox3 -- /bin/sh -c 'ls'
or
kubectl exec busybox1 -c busybox3 -- ls

Show metrics of the above pod containers, put them into file.log, and verify

kubectl top pod busybox1 --containers > file.log
cat file.log
Note: this fails with an error mentioning 'get services http:heapster' when no metrics server (or legacy Heapster) is installed in the cluster.

Create a Pod with main container busybox and which executes this “while true; do echo ‘Hi I am from Main container’ >> /var/log/index.html; sleep 5; done” and with sidecar container with nginx image which exposes on port 80. Use emptyDir Volume and mount this volume on path /var/log for busybox and on path /usr/share/nginx/html for nginx container. Verify both containers are running.

We can create a pod using an imperative command, add the containers and volumes to the generated YAML file, and then create the multi-container pod from that file (see the sketch below).
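
A sketch of the full manifest, using the pod and container names assumed by the verification commands below (the volume name var-logs is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: main-con
    image: busybox
    command: ["/bin/sh", "-c", "while true; do echo 'Hi I am from Main container' >> /var/log/index.html; sleep 5; done"]
    volumeMounts:
    - name: var-logs
      mountPath: /var/log
  - name: sidecar-con
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: var-logs
      mountPath: /usr/share/nginx/html
  volumes:
  - name: var-logs
    emptyDir: {}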

Exec into both containers and verify that index.html exist and query the index.html from sidecar container with curl localhost

kubectl exec -it multi-container-pod -c main-con -- /bin/sh -c 'cat /var/log/index.html'

kubectl exec -it multi-container-pod -c sidecar-con -- /bin/sh -c 'cat /usr/share/nginx/html/index.html'

with localhost
kubectl exec -it multi-container-pod -c sidecar-con -- /bin/sh
# apt-get update && apt-get install -y curl
# curl localhost

Pod Design (20%)

Get the pods with label information

kubectl get pods --show-labels

Labelling (env=dev) the pod using an imperative command

kubectl run nginx --image=busybox --labels=env=dev

Get the pods with label env=dev

kubectl get pods -l env=dev

Get the pods with label env=dev and also output the labels

kubectl get pods -l env=dev --show-labels

Get the pods with label env

kubectl get pods -L env

Get the pods with label env=dev or env=prod

kubectl get pods -l 'env in (dev,prod)'

change the label for the pod

kubectl label pod nginx env=uat --overwrite

remove the labels for the pod

kubectl label pod nginx env-

Add the label for the pod

kubectl label pod nginx env=dev

To label the node

kubectl label node <nodename> <label>
same as for pod

Eg: Label the node (minikube if you are using) nodeName=nginxnode

kubectl label node minikube nodeName=nginxnode

Create a Pod that will be deployed on this node with the label nodeName=nginxnode

kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml

Add

nodeSelector:
  nodeName: nginxnode
under spec section in yaml file

kubectl create -f pod.yaml

Verify the pod that it is scheduled with the node selector

kubectl describe po nginx | grep Node-Selectors

Annotate the pods with name=webapp

kubectl annotate pod nginx name=webapp

Get the pods of this deployment

First check the labels of the deployment
kubectl get deployment <name> --show-labels

As the pods created by the deployment carry the same labels (from its pod template):
kubectl get pods -l <labels>

Get the deployment rollout status

kubectl rollout status deployment webapp

Get the replicaset that created with the deployment

First we need to know the labels of the deployment and use them to get the rs.
kubectl get rs -l <labels>

to watch any action

use the -w option at the end of the command

Check the rollout history

kubectl rollout history deployment <name>

undo the deployment to previous version

kubectl rollout undo deployment <name>

To record the changes of the deployment for example changing image

kubectl set image deployment <name> <containername>=<newimage> --record
(note: --record is deprecated in recent kubectl versions)

undo the deployment to a particular previous version

kubectl rollout undo deployment <name> --to-revision=<number>

Check the history of the specific revision of that deployment

kubectl rollout history deployment <name> --revision=<number>

Pause the rollout of the deployment

kubectl rollout pause deployment <name>

Resume the rollout of the deployment

kubectl rollout resume deployment <name>

When we make changes to the deployment after pausing the rollout, the changes are applied to the deployment, but no new revision appears in the rollout history.
After resuming the rollout, we can see the revisions for those changes in the rollout history.

Apply the autoscaling to this deployment with minimum 10 and maximum 20 replicas and target CPU of 85% and verify hpa is created and replicas are increased to 10 from 1

kubectl autoscale deployment <name> --min=10 --max=20 --cpu-percent=85
kubectl get hpa
kubectl get pods -l <deployment labels>

Create a Job with the node image which prints the node version, and verify there is a pod created for this job

kubectl create job node --image=node -- /bin/sh -c 'node -v'
kubectl get jobs
kubectl get pods

To see the output, check the logs: kubectl logs <podname> will give the version of node.

Create a job with busybox image which echos “Hello I am from job” and make it run 10 times one after one

kubectl create job hello-job --image=busybox --dry-run=client -o yaml -- echo 'Hello I am from job' > job.yaml
then add completions: 10 under the spec section of the yaml file (see the sketch below)
kubectl create -f job.yaml

In this case, one Job is created, and it runs ten pods one after another.
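
A sketch of the edited job.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  completions: 10             # run the pod to completion 10 times, one after another
  template:
    spec:
      containers:
      - name: hello-job
        image: busybox
        command: ["echo", "Hello I am from job"]
      restartPolicy: Never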

Create a Cronjob with busybox image that prints date and hello from kubernetes cluster message for every minute

kubectl create cronjob date-job --image=busybox --schedule='*/1 * * * *' -- /bin/sh -c "date; echo Hello from cluster"

In this case, the cronjob creates a separate job (and pod) every minute.

To delete this cronjob
kubectl delete cj date-job

State Persistence

To bind a persistent volume claim with a persistent volume,
accessMode and storageClassName should be same.

To check whether the pvc is bound or not
kubectl get pvc
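
For completeness, a sketch of mounting a bound PVC into a pod (the claim name my-pvc and the mount path are illustrative assumptions):

spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc       # a PVC in Bound state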

Create a Pod with an image Redis and configure a volume that lasts for the lifetime of the Pod

emptyDir is the volume that lasts for the life of the pod

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-storage
      mountPath: /data/redis
  volumes:
  - name: redis-storage
    emptyDir: {}

Exec into the above pod and create a file named file.txt with the text 'This is called the file' in the path /data/redis, then open another tab, exec into the same pod again, and verify the file exists in the same path.

kubectl exec -it redis -- /bin/sh
# cd /data/redis
# echo 'This is called the file' > file.txt

kubectl exec -it redis -- /bin/sh
# cat /data/redis/file.txt

Configuration

List all the configmaps in the cluster

kubectl get cm

Create a configmap called myconfigmap with literal value appname=myapp

kubectl create cm myconfigmap --from-literal=appname=myapp

Create a file called config.txt with two values key1=value1 and key2=value2 and verify the file

cat > config.txt << EOF
key1=value1
key2=value2
EOF
cat config.txt

Create a configmap named keyvalcfgmap and read data from the file config.txt and verify that configmap is created correctly

kubectl create cm keyvalcfgmap --from-file=config.txt
kubectl get cm keyvalcfgmap -o yaml

Create an nginx pod and load environment values from the above configmap keyvalcfgmap and exec into the pod and verify the environment variables and delete the pod

kubectl run nginx --image=nginx --dry-run=client -o yaml > cm.yaml

then add

envFrom:
- configMapRef:
    name: keyvalcfgmap
under spec.containers in yaml file

kubectl create -f cm.yaml

To exec into the pod and verify the environment variables, then delete the pod
kubectl exec -it nginx -- env
kubectl delete pod nginx

Create an env file file.env with var1=val1 and create a configmap envcfgmap from this env file and verify the configmap

echo var1=val1 > file.env
kubectl create cm envcfgmap --from-env-file=file.env
kubectl get cm envcfgmap -o yaml

Create an nginx pod and load environment values from the above configmap envcfgmap and exec into the pod and verify the environment variables and delete the pod

kubectl run nginx --image=nginx --dry-run=client -o yaml > cm1.yaml

then add

env:
  - name: ENVIRONMENT
    valueFrom:
      configMapKeyRef:
        name: envcfgmap
        key: var1
under spec.containers in yaml file

kubectl create -f cm1.yaml

kubectl exec -it nginx -- env
kubectl delete pod nginx

Create a configmap called cfgvolume with values var1=val1, var2=val2 and create an nginx pod with volume nginx-volume which reads data from this configmap cfgvolume and put it on the path /etc/cfg

kubectl create cm cfgvolume --from-literal=var1=val1 --from-literal=var2=val2
kubectl describe cm cfgvolume -- to check the values

kubectl run nginx --image=nginx --dry-run=client -o yaml > cm.yaml

then add

volumes:
- name: nginx-volume
  configMap:
    name: cfgvolume
under spec section

volumeMounts:
- name: nginx-volume
  mountPath: /etc/cfg
under spec.containers

kubectl create -f cm.yaml

kubectl exec -it nginx -- /bin/sh
# cd /etc/cfg
# ls

Create a pod called secbusybox with the image busybox which executes command sleep 3600, and make sure that in all containers of the Pod, processes run with user ID 1000 and group ID 2000; verify.

kubectl run secbusybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'sleep 3600' > busybox.yaml

then add

securityContext:
   runAsUser: 1000
   runAsGroup: 2000
under spec section

kubectl create -f busybox.yaml

kubectl exec -it secbusybox -- /bin/sh
# id
This will show the user ID and group ID.

Create the same pod as above, this time setting the securityContext for the container as well, and verify that the container's securityContext overrides the Pod-level securityContext.

In busybox.yaml, add

 securityContext:
      runAsUser: 2000
under spec.containers also

kubectl delete pod secbusybox --grace-period=0 --force

kubectl create -f busybox.yaml

kubectl exec -it secbusybox -- /bin/sh
# id
we can see that the container securityContext overrides the Pod level

Create a pod with an nginx image, configure the pod with the capabilities NET_ADMIN and SYS_TIME, and verify the capabilities

kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx.yaml

then add

securityContext:
  capabilities:
    add: ["NET_ADMIN", "SYS_TIME"]
under spec.containers section

To view the capabilities for process 1:
kubectl exec -it nginx -- /bin/sh
# cd /proc/1
# cat status

Create a Pod nginx and specify a memory request and a memory limit of 100Mi and 200Mi respectively.

kubectl run nginx --image=nginx --requests='memory=100Mi' --limits='memory=200Mi'

If the requested and limited memory are 100Gi and 200Gi (too big for the nodes) respectively, the pod will stay in Pending state because of insufficient memory.

Create a Pod nginx and specify a CPU request and a CPU limit of 0.5 and 1 respectively.

kubectl run nginx --image=nginx --requests='cpu=0.5' --limits='cpu=1'
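
Note: the --requests/--limits flags of kubectl run are deprecated in recent kubectl versions; a sketch of setting the same values in the Pod YAML instead:

spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "100Mi"       # memory request
        cpu: "0.5"            # CPU request
      limits:
        memory: "200Mi"       # memory limit
        cpu: "1"              # CPU limit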

Create a secret mysecret with values user=myuser and password=mypassword

kubectl create secret generic mysecret --from-literal=user=myuser --from-literal=password=mypassword

To view the secrets
kubectl get secret mysecret -o yaml
To view the encoded data
echo <encoded> | base64 -d

Create an nginx pod which reads username as the environment variable

kubectl run nginx --image=nginx --dry-run=client -o yaml > secret.yaml

then Add

    env:
      - name: USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: user
under spec.container section of yaml file

kubectl create -f secret.yaml

kubectl exec -it nginx -- env

Create an nginx pod which loads the secret as environment variables

kubectl run nginx --image=nginx --dry-run=client -o yaml > secret.yaml

then Add

      envFrom:
      - secretRef:
          name: mysecret
under spec.container section of yaml file

kubectl create -f secret.yaml

kubectl exec -it nginx -- env

List all the service accounts in the default namespace

kubectl get sa

List all the service accounts in all namespaces

kubectl get sa --all-namespaces

Create a busybox pod which executes this command sleep 3600 with the service account admin and verify

kubectl run busybox --image=busybox --serviceaccount='admin' -- /bin/sh -c 'sleep 3600'
We can only use service accounts which are available in the cluster.

We can create a service account -- check documentation

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
EOF

kubectl run busybox --image=busybox --serviceaccount='build-robot' -- /bin/sh -c 'sleep 3600'

Create an nginx pod with containerPort 80 that should receive traffic only once the endpoint / on port 80 responds; verify, then delete the pod.

kubectl run nginx --image=nginx --port=80 --dry-run=client -o yaml > read.yaml

then add,

readinessProbe:
  httpGet:
    path: /
    port: 80
under spec.containers section

kubectl create -f read.yaml

To verify
kubectl describe pod nginx | grep -i readiness

Create an nginx pod with containerPort 80 whose health is checked at the endpoint /healthz on port 80; verify, then delete the pod.

kubectl run nginx --image=nginx --port=80 --dry-run=client -o yaml > read.yaml

then add,

livenessProbe:
  httpGet:
    path: /healthz
    port: 80
under spec.containers section

kubectl create -f read.yaml

To verify
kubectl describe pod nginx | grep -i liveness

Check what all are the options that we can configure with readiness and liveness probes

kubectl explain Pod.spec.containers.livenessProbe
kubectl explain Pod.spec.containers.readinessProbe

Create an nginx pod with containerPort 80 whose health is checked at the endpoint /healthz on port 80 and which should receive traffic only once the endpoint / on port 80 responds. Verify the pod. Both probes should wait 20 seconds before the first check and should check every 25 seconds.

kubectl run nginx --image=nginx --port=80 --dry-run=client -o yaml > read.yaml

then add,

readinessProbe:
  initialDelaySeconds: 20
  periodSeconds: 25
  httpGet:
    path: /
    port: 80
livenessProbe:
  initialDelaySeconds: 20
  periodSeconds: 25
  httpGet:
    path: /healthz
    port: 80

under spec.containers section

kubectl create -f read.yaml

To verify
kubectl describe pod nginx | grep -i liveness
kubectl describe pod nginx | grep -i readiness


multi-container pod, load config map, volume mounts

kubectl run nginx --image=nginx -- /bin/sh -c 'echo "Hi" >> text.txt'

kubectl exec -it nginx -- /bin/sh

cat volumeMount/text.txt

json

3 pods in a namespace -- application, middle tier, database

Ingress - 6% 2 services,

create an ingress resource, which is going to connect two services /var/


How to get the pattern of yaml for any object

kubectl explain Ingress --recursive | less

Create a busybox pod with this command “echo I am from busybox pod; sleep 3600;” and verify the logs.

kubectl run busybox --image=busybox -- /bin/sh -c 'echo I am from busybox pod; sleep 3600'
kubectl logs busybox

List all the events sorted by timestamp and put them into file.log and verify

kubectl get events --sort-by=.metadata.creationTimestamp > file.log
cat file.log

Create a ClusterIP service for pod nginx

kubectl expose po nginx --port=80 --target-port=9376

Create a NodePort service for pod nginx

kubectl expose po nginx --port=80 --type=NodePort

Create a temporary busybox pod and hit the service. Verify that the service returns the nginx index.html page.

kubectl get svc nginx -o wide
Get IP address

kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- <IPaddress>:80

Create a NetworkPolicy which denies all ingress traffic

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
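
To apply and verify (the filename is illustrative):

kubectl apply -f default-deny-ingress.yaml
kubectl describe networkpolicy default-deny-ingress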