Kubernetes
Kubernetes Overview
The Master coordinates all activities in your cluster. The Master controls several Nodes. Each Node provides resources to the Kubernetes cluster.
Every Kubernetes Node runs at least a Kubelet, a process that handles communication between the Kubernetes Master and the Node and manages the Pods and containers running on that machine, plus a container runtime such as Docker to pull and run the containers.
A Node can run multiple Pods; a Pod contains one or more containers with the code to be executed plus storage volumes.
You should expect that Pods may be stopped at any time on a Node and restarted on another one.
- A ReplicaSet ensures that a given number of Pods is always running.
- A Deployment provides declarative updates for Pods and ReplicaSets.
- A Service in Kubernetes groups one or more Pods together into a logical set that can then be accessed. Without a Service, Pods cannot be accessed from outside the cluster. Services can be exposed in different ways by specifying a type in the ServiceSpec:
  - ClusterIP (default): only reachable from within the cluster.
  - NodePort: reachable from outside the cluster using NodeIP:NodePort.
  - LoadBalancer: external load balancer, assigns a fixed, external IP to the Service.
  - ExternalName: maps the Service to an arbitrary DNS name (CNAME record). No proxy is used.
- Ingress manages external access to the services in a cluster, providing load balancing, SSL termination and name-based virtual hosting.
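As an illustration of the concepts above, a minimal Deployment plus Service could look roughly like this (name, image and ports are placeholders, not taken from these notes):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2                  # the ReplicaSet behind this keeps 2 Pods running
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.17      # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort               # reachable via NodeIP:NodePort
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 80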
Kubernetes commands
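Show general cluster information; the output below presumably comes from
kubectl cluster-info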
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
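List the nodes:
kubectl get nodes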
NAME       STATUS   ROLES    AGE    VERSION
minikube   Ready    master   3h5m   v1.14.1
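List the Deployments:
kubectl get deployments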
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
hello-minikube        1/1     1            1           3h20m
kubernetes-bootcamp   1/1     1            1           17m
Make all pods accessible via network
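Probably via the API proxy, which forwards requests from your local machine into the cluster network:
kubectl proxy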
kubectl describe - show detailed information about a resource
kubectl logs - print the logs from a container in a pod
kubectl exec - execute a command on a container in a pod
Fire up a new pod with an interactive shell
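E.g. (the image is just an example):
kubectl run -it --rm debug --image=busybox --restart=Never -- sh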
kubectl get - list resources
kubectl get services
Create a new Service of type NodePort for a running Deployment
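Presumably something like this (Deployment name and port are assumptions based on the bootcamp example above):
kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port=8080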
Show deployments
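The Name/Labels fragment below looks like kubectl describe output, e.g.:
kubectl describe deployment hello-minikube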
Name: hello-minikube
Labels: run=hello-minikube
With this label you can find all pods
kubectl get pods -o wide -l run=kubernetes-bootcamp
Delete a service
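E.g. (the Service name is an assumption):
kubectl delete service kubernetes-bootcamp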
Request redundancy
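I.e. scale the Deployment to several replicas, e.g.:
kubectl scale deployment/kubernetes-bootcamp --replicas=4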
kubectl get deployments
Deploy new version v2
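E.g. by pointing the Deployment at a new image tag (names and image are assumptions from the bootcamp example):
kubectl set image deployment/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2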
See status of rolling update
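E.g. (Deployment name as above):
kubectl rollout status deployment/kubernetes-bootcamp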
Rollback to previous version
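E.g.:
kubectl rollout undo deployment/kubernetes-bootcamp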
Remove node from cluster
kubectl drain --ignore-daemonsets myNode
systemctl stop kubelet
/etc/init.d/docker stop
To add it again
systemctl start kubelet
kubectl uncordon myNode
Restrict where pods are running
You may want to restrict on which nodes a Deployment runs its Pods.
The easiest way seems to be to add a label to all nodes saying yes / no.
kubectl label nodes node02.example.com i-like-you=true
kubectl label nodes node03.example.com i-like-you=false
kubectl get nodes --show-labels
In your Deployment you need a nodeSelector that insists on one or more labels, e.g. like this (the linux label is set by Kubernetes itself, not by us); see the placement sketch below:
beta.kubernetes.io/os: linux
i-like-you: "true"
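In the Deployment spec these labels go under spec.template.spec.nodeSelector, roughly like this:

spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
        i-like-you: "true"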
Make port accessible from outside
- Your application needs to listen on a port. Within the Docker instance the port exists.
- In the Dockerfile, expose the port. Outside the Docker instance the port exists. Check with docker ps.
- In the Kubernetes service.yaml, define the ports. From the Kubernetes master the port exists.
- Forward the port. Outside the master the port exists on the master's IP.
ports:
- port: 9000
  targetPort: 9000      # the port the container actually listens on
  name: main
- port: 8787
  targetPort: 8787
  name: debug
kubectl describe services
kubectl port-forward deployment/my-service 8787:8787 --address 0.0.0.0
Install Kubernetes for playing with it
Install VirtualBox
Install kubectl (basically adding the Kubernetes apt repository and installing the package), then download the minikube binary and chmod +x minikube.
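The download step presumably looked roughly like this (the URL is the standard minikube release location, an assumption here):
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube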
Start minikube
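Presumably simply:
minikube start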
That will do everything for you
Downloading Minikube ISO ...
142.88 MB / 142.88 MB [============================================] 100.00% 0s
Downloading Kubernetes v1.14.1 images in the background ...
Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
"minikube" IP address is 192.168.99.100
Configuring Docker as the container runtime ...
Version of container runtime is 18.06.3-ce
Waiting for image downloads to complete ...
Preparing Kubernetes environment ...
Downloading kubeadm v1.14.1
Downloading kubelet v1.14.1
Pulling images required by Kubernetes v1.14.1 ...
Launching Kubernetes v1.14.1 using kubeadm ...
Waiting for pods: apiserver proxy etcd scheduler controller dns
Configuring cluster permissions ...
Verifying component health .....
kubectl is now configured to use "minikube"
Done! Thank you for using minikube!
Install Kubernetes for real
Install docker
apt-get install docker-ce
Install Kubernetes
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && apt-get update && apt-get install -y apt-transport-https
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
kubeadm init --token-ttl 0
Configure DNS
kubectl get pods -n kube-system -oname |grep coredns
pod/coredns-123
kubectl delete pod -n kube-system coredns-123
Install the Kubernetes UI
Create an admin user. Create a file dashboard-adminuser.yaml with
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
Create a file dashboard-adminrole.yaml with
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Send the files
kubectl apply -f dashboard-adminuser.yaml
kubectl apply -f dashboard-adminrole.yaml
Get the token
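One common way (the exact command is not in these notes):
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')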
Deploy the UI
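The manifest URL depends on the dashboard version; for a v1.x dashboard running in kube-system it was roughly:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml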
Make it accessible
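Presumably via the API proxy:
kubectl proxy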
You can access it locally at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ with the token from the previous step
Control multiple Kubernetes Clusters
Configure Access to Multiple Clusters
Install kubectl and merge the config of several clusters like this into ~/.kube/config
clusters:
- cluster:
    certificate-authority-data: ABC...
    server: https://192.168.1.2:1234
  name: kubernetes-internal
- cluster:
    certificate-authority-data: ABC...
    server: https://172.17.0.5:1234
  name: kubernetes-prod
- cluster:
    certificate-authority: /home/foo/.minikube/ca.crt
    server: https://192.168.2.3:4567
  name: minikube
contexts:
- context:
    cluster: kubernetes-internal
    user: kubernetes-admin-internal
  name: internal
- context:
    cluster: kubernetes-prod
    user: kubernetes-admin-prod
  name: prod
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: internal
kind: Config
preferences: {}
users:
- name: kubernetes-admin-internal
  user:
    client-certificate-data: ABC...
    client-key-data: ABC...
- name: kubernetes-admin-prod
  user:
    client-certificate-data: ABC...
    client-key-data: ABC...
- name: minikube
  user:
    client-certificate: /home/foo/.minikube/client.crt
    client-key: /home/foo/.minikube/client.key
The current-context entry in the config selects which cluster is used for the next command. You can also change this via kubectl
kubectl config use-context prod
kubectl config use-context minikube
You can also have multiple kubectl configuration files and switch like this
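For example (the file names are placeholders):
export KUBECONFIG=~/.kube/config-prod
kubectl get nodes
kubectl --kubeconfig ~/.kube/config-internal get nodes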
Endpoints
Kubernetes has 3 endpoints (probes) it can check for you: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
- Startup: When a Pod is started, Kubernetes waits for this probe to succeed. Before that it does not make sense to do anything with the Pod.
- Readiness: When this is true, the Pod is ready to do what it is supposed to do. Good for Pods that have a long startup time.
- Liveness: As soon as this is no longer true, the Pod is killed. This is kind of a dead man's switch.
Watch out: the readiness and liveness probes are called constantly.
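A rough sketch of how the three probes look in a container spec (paths and port are placeholders; startupProbe needs a newer Kubernetes version than the 1.14 used above):

livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  periodSeconds: 10
startupProbe:
  httpGet:
    path: /health/live
    port: 8080
  failureThreshold: 30
  periodSeconds: 10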
Deploy services
Docker images
kubectl get pods
kubectl exec -it thorsten-debian1-1234axz-tw532 -- /bin/bash
Helm Kubernetes package management
Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Add a Helm repository
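E.g. (repository name and URL are just examples):
helm repo add stable https://charts.helm.sh/stable
helm repo update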
To remove it again
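E.g. (using the example repository name from above):
helm repo remove stable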