Kubernetes

Kubernetes Overview

The Master coordinates all activities in your cluster and controls several Nodes; each Node provides compute resources to the Kubernetes cluster.

Every Kubernetes Node runs at least a Kubelet, a process that handles the communication between the Kubernetes Master and the Node and manages the Pods and containers running on the machine, plus a container runtime such as Docker to pull and run the container images.

A Node can run multiple Pods. A Pod contains one or more containers with the code to be executed, plus storage volumes.

You should expect that Pods may be stopped at any time on one Node and restarted on another one.

  • A ReplicaSet ensures that a given number of Pods is always running.
  • A Deployment provides declarative updates for Pods and ReplicaSets.
  • A Service in Kubernetes groups one or more Pods together to form a logical set of Pods that can then be accessed. Without a Service, Pods cannot be reached from outside the cluster. Services can be exposed in different ways by specifying a type in the ServiceSpec (see the example manifest after this list):
    • ClusterIP (default) Only reachable from within the cluster.
    • NodePort Reachable from outside the cluster using NodeIP:NodePort.
    • LoadBalancer External load balancer, assigns a fixed, external IP to the Service.
    • ExternalName Exposes the Service using an arbitrary name. No proxy is used.
  • Ingress manages external access to the services in a cluster, providing load balancing, SSL termination and name-based virtual hosting.
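
For illustration, a minimal Service manifest of type NodePort, sketched with a hypothetical name (my-app) and port (8080):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080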

Kubernetes commands

Kubernetes Cheatsheet

kubectl cluster-info

Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
kubectl get nodes

NAME       STATUS   ROLES    AGE    VERSION
minikube   Ready    master   3h5m   v1.14.1
kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080
kubectl get deployments

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
hello-minikube        1/1     1            1           3h20m
kubernetes-bootcamp   1/1     1            1           17m

Make all Pods accessible via the network through the API proxy

kubectl proxy

kubectl describe - show detailed information about a resource

kubectl describe pods

kubectl logs - print the logs from a container in a pod

kubectl logs hello-minikube-56cdb79778-ncf9t

kubectl exec - execute a command on a container in a pod

kubectl exec -ti hello-minikube-56cdb79778-ncf9t -- bash

Fire up a new pod with an interactive shell

kubectl run my-shell --rm -i --tty --image ubuntu -- bash

kubectl get - list resources

kubectl get pods
kubectl get services

Create a new Service of type NodePort for the running Deployment

kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
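
To see which node port was assigned and to reach the Service on minikube (the port 31234 below is a made-up example, substitute the NodePort shown by describe):

kubectl describe services/kubernetes-bootcamp
minikube ip
curl $(minikube ip):31234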

Show deployments

kubectl describe deployment

Name:                   hello-minikube
Labels:                 run=hello-minikube

Using this label you can find all Pods that belong to the Deployment

kubectl get pods -l run=kubernetes-bootcamp
kubectl get pods -o wide -l run=kubernetes-bootcamp

Delete a service

kubectl delete service -l run=kubernetes-bootcamp

Request redundancy

kubectl scale deployments/kubernetes-bootcamp --replicas=2
kubectl get deployments

Deploy new version v2

kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2

See status of rolling update

kubectl rollout status deployments/kubernetes-bootcamp

Roll back to the previous version

kubectl rollout undo deployments/kubernetes-bootcamp
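
To inspect the recorded revisions of a Deployment:

kubectl rollout history deployments/kubernetes-bootcamp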

Remove node from cluster

kubectl get nodes
kubectl drain --ignore-daemonsets myNode
systemctl stop kubelet
/etc/init.d/docker stop

To add it again

/etc/init.d/docker start
systemctl start kubelet
kubectl uncordon myNode

Restrict where pods are running

You may want to restrict on which nodes a Deployment runs its Pods.

The easiest way seems to be to add a yes/no label to all nodes.

kubectl label nodes node01.example.com i-like-you=false
kubectl label nodes node02.example.com i-like-you=true
kubectl label nodes node03.example.com i-like-you=false
kubectl get nodes --show-labels

In your Deployment you need a nodeSelector that insists on one or more labels, e.g. like this (the linux label is a built-in node label, not one we set):

nodeSelector:
  beta.kubernetes.io/os: linux
  i-like-you: "true"
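
For context, a sketch of where the nodeSelector sits inside a Deployment manifest (the names my-app and the image are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        i-like-you: "true"
      containers:
      - name: my-app
        image: my-app:latest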

Make port accessible from outside

  1. Your application needs to listen on a port. Within the Docker container the port exists.
  2. In the Dockerfile, expose the port. Outside the Docker container the port exists. Check with docker ps.
  3. In the Kubernetes service.yaml, define the ports. From the Kubernetes master the port exists.
  4. Forward the port. Outside the master the port exists on the master's IP.

In the Service's spec section:

ports:
- port: 9000
  targetPort: port   # assuming "port" is the name of the container port
  name: main
- port: 8787
  targetPort: port
  name: debug
kubectl get services
kubectl describe services
kubectl port-forward deployment/my-service 8787:8787 --address 0.0.0.0
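
Putting steps 1-4 together, a minimal sketch of the full Service manifest; the name and selector label are hypothetical, and numeric target ports are assumed instead of a named container port:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: main
    port: 9000
    targetPort: 9000
  - name: debug
    port: 8787
    targetPort: 8787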

Install Kubernetes for playing with it

Install VirtualBox

apt-get install virtualbox

Install kubectl, which basically means adding the repository and running

apt-get install kubectl

Download the minikube binary

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
  && chmod +x minikube

Start minikube

./minikube start

That will do everything for you:

😄  minikube v1.0.1 on linux (amd64)
💿  Downloading Minikube ISO ...
 142.88 MB / 142.88 MB [============================================] 100.00% 0s
🤹  Downloading Kubernetes v1.14.1 images in the background ...
🔥  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
🐳  Version of container runtime is 18.06.3-ce
⌛  Waiting for image downloads to complete ...
✨  Preparing Kubernetes environment ...
💾  Downloading kubeadm v1.14.1
💾  Downloading kubelet v1.14.1
🚜  Pulling images required by Kubernetes v1.14.1 ...
🚀  Launching Kubernetes v1.14.1 using kubeadm ...
⌛  Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑  Configuring cluster permissions ...
🤔  Verifying component health .....
💗  kubectl is now configured to use "minikube"
🏄  Done! Thank you for using minikube!

Install Kubernetes for real

Install docker

curl -fsSL get.docker.com | sudo sh
apt-get install docker-ce

Install Kubernetes

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && apt-get update && apt-get install -y apt-transport-https
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
kubeadm init --token-ttl 0
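
After kubeadm init finishes it prints instructions for setting up kubectl access; typically you copy the admin kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config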

Configure DNS

kubectl -n kube-system edit configmap coredns
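
For example, to forward queries for an internal domain to a specific resolver, the Corefile in the ConfigMap could get an extra server block like this (domain and IP are hypothetical):

example.internal:53 {
    errors
    cache 30
    forward . 192.168.1.1
}

After changing the ConfigMap, restart CoreDNS by deleting its Pods; the Deployment recreates them: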

kubectl get pods -n kube-system -o name | grep coredns
pod/coredns-123
kubectl delete pod -n kube-system coredns-123

Install the Kubernetes UI

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

Create an admin user

Create a file dashboard-adminuser.yaml with

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

Create a file dashboard-adminrole.yaml with

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply the files

kubectl apply -f dashboard-adminuser.yaml
kubectl apply -f dashboard-adminrole.yaml

Get the token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Make it accessible

kubectl proxy

You can access it locally at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ with the token from the previous step.

Control multiple Kubernetes Clusters

Configure Access to Multiple Clusters

Install kubectl and merge the config of several clusters like this into ~/.kube/config:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ABC...
    server: https://192.168.1.2:1234
  name: kubernetes-internal
- cluster:
    certificate-authority-data: ABC...
    server: https://172.17.0.5:1234
  name: kubernetes-prod
- cluster:
    certificate-authority: /home/foo/.minikube/ca.crt
    server: https://192.168.2.3:4567
  name: minikube
contexts:
- context:
    cluster: kubernetes-internal
    user: kubernetes-admin-internal
  name: internal
- context:
    cluster: kubernetes-prod
    user: kubernetes-admin-prod
  name: prod
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: internal
kind: Config
preferences: {}
users:
- name: kubernetes-admin-internal
  user:
    client-certificate-data: ABC...
    client-key-data: ABC...
- name: kubernetes-admin-prod
  user:
    client-certificate-data: ABC...
    client-key-data: ABC...
- name: minikube
  user:
    client-certificate: /home/foo/.minikube/client.crt
    client-key: /home/foo/.minikube/client.key

The current-context line in the config selects which cluster will be used for the next command. You can also change this via kubectl:

kubectl config use-context internal
kubectl config use-context prod
kubectl config use-context minikube
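
To see all contexts and which one is active:

kubectl config get-contexts
kubectl config current-context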

You can also have multiple kubectl configuration files and switch like this

kubectl --kubeconfig /root/.kube/some_other_config get pods
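
Alternatively, set the KUBECONFIG environment variable to make kubectl merge several config files for the current shell:

export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/some_other_config
kubectl config get-contexts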

Endpoints

Kubernetes has 3 probe endpoints it can check for you: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

  • Startup: When a Pod is started, Kubernetes waits for this to come up. Before that it does not make sense to do anything with the Pod.
  • Readiness: When this is true the Pod is ready to do what it is supposed to do. Good for Pods that have a long startup time.
  • Liveness: As soon as this is no longer true, the Pod is killed. This is kind of a dead man's switch.

Watch out: the readiness and liveness probes are called continuously.

https://blog.colinbreck.com/kubernetes-liveness-and-readiness-probes-how-to-avoid-shooting-yourself-in-the-foot/
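
A minimal sketch of how the probes look in a container spec, assuming the application serves /healthz and /ready on port 8080 (hypothetical paths; startupProbe requires a recent Kubernetes version):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30
  periodSeconds: 10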

Deploy services

Docker images

kubectl run --image=debian debian1 -- sleep 3000
kubectl get pods
kubectl exec -it thorsten-debian1-1234axz-tw532 -- /bin/bash

Helm Kubernetes package management

Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

Install Helm

Add a Helm repository

helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install thorsten-mongodb --set usePassword=false,persistence.enabled=false stable/mongodb
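
To check that the release is running:

helm ls
kubectl get pods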

To remove it again

helm uninstall thorsten-mongodb