Okay, hello again everybody!!!

Now we’re diving straight into more advanced and interesting topics: DaemonSets, Static Pods, scheduler events, and the basics of monitoring and logging in a Kubernetes cluster. Just about two more articles and we’re on to the next chapter, leaving the Kubernetes Fundamentals behind. Okay, don’t give up on your dream! Keep the learning up until you become a Kubernetes Administrator.

Let’s go!

1. DaemonSets

When we want to run one copy of the same pod on every node, we use a DaemonSet.

DaemonSets and other component logic

What are the use cases?

  1. Monitoring & Logging
  2. Networking
  3. Kube-Proxy

What’s the difference from a ReplicaSet?

A ReplicaSet just maintains a desired number of pods, with no guarantee that one lands on every node. A DaemonSet, on the other hand, uses node affinity and taints/tolerations to place exactly one pod on each node.

How to configure it?

It’s really similar to the ReplicaSet format. Here’s an example for the monitoring use case:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-daemon
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent

And to check it, run kubectl get daemonsets or kubectl describe daemonsets monitoring-daemon.
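One note on scheduling: control-plane nodes usually carry a taint, so a DaemonSet pod won’t land there unless it tolerates it. Here’s a minimal sketch of a toleration in the pod template, assuming the standard kubeadm taint key (verify yours with kubectl describe node):

spec:
  template:
    spec:
      tolerations:
      # common kubeadm control-plane taint; adjust to your cluster
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule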

2. Static Pods

Static pods on the kubelet

A static pod is an independent pod that is run and controlled directly by the kubelet on a node. There’s no kube-apiserver controlling it.

What are the use cases?

  1. Static pods exist so that the core components can run independently of the rest of the cluster. This is important for the control-plane pods that live on the master node (in kubeadm clusters, kube-apiserver, etcd, and the controller manager are themselves static pods).
  2. If you want to make your own static pod on another node, you first need to find that node’s static pod path. Here are the steps (see the sketch after this list):
    • Run kubectl get nodes -o wide
    • SSH into the node where your static pod should be deployed
    • Open the /var/lib/kubelet/config.yaml file
    • Search for the staticPodPath setting
    • Go to that path and there you’ll find the location of your static pods
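Put together, the whole hunt looks something like this (node01 and the path are assumptions based on kubeadm defaults; your node name and path may differ):

kubectl get nodes -o wide
ssh node01                    # or use the node's INTERNAL-IP from the previous command
grep -i staticPodPath /var/lib/kubelet/config.yaml
# staticPodPath: /etc/kubernetes/manifests
ls /etc/kubernetes/manifests  # the static pod manifests live here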

How to configure it?

For example, here we use the busybox image. Run this on the node itself, since the redirect has to write into that node’s local manifests directory:

kubectl run --restart=Never --image=busybox static-busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml

If we want to change the image, simply edit the static pod definition file and save it; the kubelet watches the manifests directory and recreates the pod automatically. Or just regenerate the file:

kubectl run --restart=Never --image=busybox:1.28.4 static-busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml
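To confirm the pod is running, list pods as usual. The kubelet registers a mirror pod with the node name appended, so on a node named node01 (an example name) you’d see something like:

kubectl get pods
# NAME                    READY   STATUS    RESTARTS   AGE
# static-busybox-node01   1/1     Running   0          10s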

3. Scheduler Events

Different schedulers in one kube cluster

Sometimes your application needs a custom scheduler to meet its performance requirements. In that case, you can run more than one scheduler in a single cluster and point specific apps at it.

How to deploy an additional scheduler?

Run: wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler
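Then make the binary executable and move it to the path the service files below reference (a sketch, assuming /usr/local/bin):

chmod +x kube-scheduler
mv kube-scheduler /usr/local/bin/kube-scheduler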

For kube-scheduler.service:

ExecStart=/usr/local/bin/kube-scheduler \
  --config=/etc/kubernetes/config/kube-scheduler.yaml \
  --scheduler-name=default-scheduler

For my-scheduler.service:

ExecStart=/usr/local/bin/kube-scheduler \
  --config=/etc/kubernetes/config/kube-scheduler.yaml \
  --scheduler-name=my-scheduler

How to make your own scheduler?

Copy kube-scheduler.yaml from the /etc/kubernetes/manifests/ directory to any other location, then change the name to my-scheduler.
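In practice that’s a single copy command (keeping it in /root is an arbitrary choice):

cp /etc/kubernetes/manifests/kube-scheduler.yaml /root/my-scheduler.yaml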

Add or update the following command arguments in the YAML file:

...
- --leader-elect=false
- --port=10282
- --scheduler-name=my-scheduler
- --secure-port=0
...

Here, we are setting leader-elect to false for our new custom scheduler called my-scheduler. We are also using a different port, 10282, which is not currently in use in the control plane. The default scheduler uses secure-port 10259 to serve HTTPS with authentication and authorization. This is not needed for our custom scheduler, so we can disable HTTPS by setting the value of secure-port to 0.

Finally, because we have set secure-port to 0, replace HTTPS with HTTP and use the correct ports under liveness and startup probes.

The final YAML file would look something like this:

---
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: my-scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false
    - --port=10282
    - --scheduler-name=my-scheduler
    - --secure-port=0
    image: k8s.gcr.io/kube-scheduler:v1.19.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10282
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10282
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}

Run kubectl create -f my-scheduler.yaml and wait some time for the container to reach the Running state.
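You can check on it with:

kubectl get pods -n kube-system | grep my-scheduler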

How to use your custom scheduler?

Set the schedulerName property in the pod specification to my-scheduler.

---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  schedulerName: my-scheduler
  containers:
  - image: nginx
    name: nginx

Run kubectl create -f nginx-pod.yaml
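To verify that the pod was really picked up by the custom scheduler, look at the events; the Scheduled event’s source should show my-scheduler:

kubectl get events -o wide | grep -i scheduled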

4. Monitoring & Logging

Monitoring

Monitoring lets developers get accurate stats from the Metrics Server, which retrieves data from both pods and nodes.

What do you monitor?

  1. Nodes: the number of nodes, which ones are healthy, and performance metrics (CPU, memory, network, and disk utilization), etc.
  2. Pods: the number of pods and performance metrics for each pod, etc.

How does monitoring work?

The Metrics Server and the kubelet are connected.

Inside the kubelet there’s cAdvisor, which is responsible for retrieving metrics from pods and exposing them to the Metrics Server through the kubelet API. The Metrics Server pulls metrics from every pod and node, then keeps them in an in-memory store only (nothing is written to disk), so it can’t show you historical data.

How to configure it?

  • On minikube, run: minikube addons enable metrics-server
  • Otherwise, run: git clone https://github.com/kubernetes-incubator/metrics-server.git
  • Then: kubectl create -f deploy/1.8+/

To view node stats, just run kubectl top node; for pod stats, use kubectl top pod.
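The output of kubectl top node looks something like this (the numbers here are purely illustrative):

NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node01   166m         8%     1231Mi          31%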

Logging

Logging streams output directly from the pod we’re interested in, which lets us see what’s happening in the background.

How to use it?

To livestream logs from a pod, use: kubectl logs -f <pod name>

To livestream logs from a pod that has multiple containers, use: kubectl logs -f <pod name> -c <container name>
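For example, with a hypothetical pod named webapp that runs containers app and log-shipper, you’d stream the shipper’s logs like this:

kubectl logs -f webapp -c log-shipper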


I think that’s all for now. Again, I suggest you look at other sources to really understand the concepts behind these topics. And you really need to practice more in order to understand the flow. Okay, in the next article we’re going to discuss Application Lifecycle Management.

So keep your head up and see you!!

If you want to know more, check out this wonderful video!

Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]

TechWorld with Nana