Originally developed by Google and introduced in 2014, Kubernetes has grown to be one of the largest and most popular open source projects in the world.
Built for distributed systems and aimed at cloud developers of all scales, it is meant to support reliable and scalable software systems.
Just like Docker Swarm, Kubernetes is a container orchestrator.
Why is Docker Swarm not enough?
Docker Swarm can support production deployments, but Kubernetes natively supports large deployments at scale.
- Docker Swarm relies on Docker primitives to build the environment, reusing the docker-compose.yml file:
  docker stack deploy -c docker-compose.yml <stack_name>
- Kubernetes provides a dedicated command-line tool, kubectl, to manage the environment:
  kubectl create -f configuration-file.yaml
  an instruction valid for every resource in the Kubernetes ecosystem (a short workflow sketch follows the list).
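As a minimal sketch of that workflow, assuming the resources are described in a file called configuration-file.yaml (the placeholder name used above):

kubectl create -f configuration-file.yaml     # create the resources described in the file
kubectl get -f configuration-file.yaml        # list them and check their status
kubectl describe -f configuration-file.yaml   # inspect details and recent events
kubectl delete -f configuration-file.yaml     # remove them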
NAME                       SHORTNAMES   API VERSION                    NAMESPACED   KIND
namespaces                 ns           v1                             false        Namespace
nodes                      no           v1                             false        Node
persistentvolumeclaims     pvc          v1                             true         PersistentVolumeClaim
persistentvolumes          pv           v1                             false        PersistentVolume
pods                       po           v1                             true         Pod
podtemplates                            v1                             true         PodTemplate
replicationcontrollers     rc           v1                             true         ReplicationController
resourcequotas             quota        v1                             true         ResourceQuota
secrets                                 v1                             true         Secret
serviceaccounts            sa           v1                             true         ServiceAccount
services                   svc          v1                             true         Service
apiservices                             apiregistration.k8s.io/v1      false        APIService
replicasets                rs           apps/v1                        true         ReplicaSet
deployments                deploy       apps/v1                        true         Deployment
statefulsets               sts          apps/v1                        true         StatefulSet
horizontalpodautoscalers   hpa          autoscaling/v2                 true         HorizontalPodAutoscaler
cronjobs                   cj           batch/v1                       true         CronJob
jobs                                    batch/v1                       true         Job
nodes                                   metrics.k8s.io/v1beta1         false        NodeMetrics
pods                                    metrics.k8s.io/v1beta1         true         PodMetrics
clusterrolebindings                     rbac.authorization.k8s.io/v1   false        ClusterRoleBinding
clusterroles                            rbac.authorization.k8s.io/v1   false        ClusterRole
rolebindings                            rbac.authorization.k8s.io/v1   true         RoleBinding
roles                                   rbac.authorization.k8s.io/v1   true         Role
storageclasses             sc           storage.k8s.io/v1              false        StorageClass
volumeattachments                       storage.k8s.io/v1              false        VolumeAttachment
And the list goes on…
- Services can scale their replica number
- The cluster can grow its capacity
- A Metrics Server plugin enables monitoring
- Fine-grained access control based on RBAC
- Resources are externally managed with the kubectl tool
- Cluster resources can be split logically into namespaces
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  - name: hello
    image: busybox:latest
    command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 80
      protocol: TCP
  restartPolicy: OnFailure
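A possible way to try this manifest, assuming it is saved as hello-world.yaml (a filename not specified in the slides):

kubectl apply -f hello-world.yaml   # create the Pod
kubectl get pod hello-world         # wait for STATUS to become Running
kubectl logs hello-world            # should print "Hello, Kubernetes!"
kubectl delete pod hello-world      # clean up when done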
CPU resources are expressed in millicpu units.
A millicpu, or millicore, is equivalent to 1/1000th of a CPU core.
0.1 = 10% = 100m
Memory resources are measured in bytes.
You can express memory as a plain integer or with quantity suffixes, for example E, P, T, G, M, k.
You can also use the power-of-two equivalents, such as Ei, Pi, Ti, Gi, Mi, Ki.
128974848 = 123Mi ≈ 129e6 = 129M (roughly the same quantity)
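As a quick sanity check of the memory figures above, done with plain shell arithmetic (not a Kubernetes command):

echo $((123 * 1024 * 1024))   # 123Mi -> 128974848 bytes
echo $((129 * 1000 * 1000))   # 129M  -> 129000000 bytes, roughly the same quantity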
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
Pods are not directly managed by the user, but through higher-level objects called Deployments, which in turn manage ReplicaSets and Pods.
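A possible way to observe this hierarchy, assuming the manifest above is saved as nginx-deployment.yaml (a filename not given in the slides):

kubectl apply -f nginx-deployment.yaml
kubectl get deployments           # the Deployment object
kubectl get replicasets           # the ReplicaSet it created
kubectl get pods -l app=nginx     # the three Pods managed by the ReplicaSet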
The connection between Deployments and Pods is made through the selector field, which identifies the Pods that the Deployment will manage.
The ReplicaSet is the object that manages the Pod number, scaling Pods up and down.
Since the Deployment object manages a ReplicaSet, users can dynamically scale the number of Pods using:
kubectl scale deployment <name> --replicas=<number>
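For instance, reusing the nginx-deployment from the earlier manifest (an illustrative target, not prescribed by the slides):

kubectl scale deployment nginx-deployment --replicas=5   # grow to 5 Pods
kubectl get pods -l app=nginx                            # the ReplicaSet spawns the new Pods
kubectl scale deployment nginx-deployment --replicas=3   # shrink back to 3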
Service discovery in Kubernetes is done with Service objects. How do they work? They exploit label selectors.
| Name | Description |
|---|---|
| ClusterIP | Exposes the Service on a cluster-internal IP, only reachable from within the cluster. This is the default value. |
| NodePort | Exposes the Service on each Node's IP at a fixed port. |
| LoadBalancer | Exposes the Service externally using a load balancer. This type of Service is not offered directly by Kubernetes. |
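As a sketch of how the types above are used in practice, a Deployment can also be exposed imperatively; the Service name here is illustrative, and the target is the nginx-deployment from the earlier manifest:

# --type selects one of the Service types listed above; ClusterIP is the default
kubectl expose deployment nginx-deployment --name=nginx-service --type=NodePort --port=80
kubectl get service nginx-service   # shows the node port assigned by the cluster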
Deployments are not the only solution:
- Job: an object that runs short-lived, one-off tasks. Useful for things to do once and then stop, for example a database migration (a minimal sketch follows this list).
- CronJob: meant for performing regular scheduled actions such as backups, report generation, and so on.
- StatefulSet: like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
- DaemonSet: ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected.
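A minimal imperative sketch of the first two objects (names, images, and the schedule are illustrative, not taken from the slides):

# Job: runs once to completion and then stops
kubectl create job db-migration --image=busybox:latest -- sh -c 'echo "migrating..." && sleep 5'
# CronJob: runs on a schedule, here every night at 02:00
kubectl create cronjob nightly-backup --image=busybox:latest --schedule="0 2 * * *" -- sh -c 'echo "backup done"'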
Fine-grained access control is enforced through the ServiceAccount resource object, which:
- is scoped to a Namespace,
- is managed with kubectl like any other resource (see the example below).
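For example (the namespace and account names are illustrative), a ServiceAccount lives inside a namespace and is handled like any other resource:

kubectl create namespace demo                           # a hypothetical namespace
kubectl create serviceaccount app-sa --namespace demo   # the account is scoped to that namespace
kubectl get serviceaccounts --namespace demo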
Several tools support the installation of a production-ready Kubernetes cluster, since it is a complex process.
For testing purposes, Kubernetes can be installed on a single machine using minikube:
minikube start --driver='virtualbox' --extra-config=kubelet.housekeeping-interval=10s
The driver specifies where to install the Kubernetes infrastructure, in this case inside a virtual machine managed by VirtualBox (which is pre-installed on the machine).
Admissible values are: virtualbox, kvm2, qemu2, vmware, docker, none, ssh, podman. The default is auto-detect.
The --extra-config param is used to configure Kubernetes' kubelet during the initial startup. The flag kubelet.housekeeping-interval specifies the frequency at which the kubelet evaluates eviction thresholds; we need it to correctly execute the example provided in the next slides.
$ minikube start
😄 minikube v1.32.0 on Arch 23.1.0
✨ Using the virtualbox driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing virtualbox VM for "minikube" ...
🐳 Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
▪️ kubelet.housekeeping-interval=10s
🔗 Configuring bridge CNI (Container Networking Interface) ...
▪️ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
minikube also configures the kubectl environment, so the cluster can be used right away, provided you have kubectl installed!

| Command | Description |
|---|---|
| minikube start | Start the cluster |
| minikube stop | Stop the cluster |
| minikube delete | Delete the cluster |
| minikube status | Show the status of the cluster |
| minikube dashboard | Expose the built-in dashboard on localhost |
| minikube addons list/enable/disable | List/enable/disable the available plugins of the cluster |
The Kubernetes command-line tool, kubectl
, allows you to run commands against Kubernetes clusters.
After a successful installation, you can verify that kubectl is correctly connected to the minikube cluster:
$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.59.101:8443
CoreDNS is running at https://192.168.59.101:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
How does kubectl connect to the cluster? The connection parameters are stored in the ~/.kube/config file; you can also inspect them with kubectl config view.
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/anitvam/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 18 Dec 2023 12:59:17 CET
provider: minikube.sigs.k8s.io
version: v1.32.0
name: cluster_info
server: https://192.168.59.101:8443
name: minikube
contexts:
- context:
cluster: minikube
extensions:
- extension:
last-update: Mon, 18 Dec 2023 12:59:17 CET
provider: minikube.sigs.k8s.io
version: v1.32.0
name: context_info
namespace: default
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
user:
client-certificate: /home/anitvam/.minikube/profiles/minikube/client.crt
client-key: /home/anitvam/.minikube/profiles/minikube/client.key
You can switch between the contexts defined in this configuration with the kubectl config use-context command.
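For example, with the configuration above:

kubectl config get-contexts           # list the available contexts
kubectl config use-context minikube   # make "minikube" the current context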
First of all, we need to enable the metrics-server, which is disabled by default, with the
minikube addons enable metrics-server
command. You can then verify it with the kubectl top nodes command: as long as kubectl top does not work properly, the metrics server is not enabled yet. The operation may take a while.
Now we can deploy our first application, composed of a Deployment and a Service:
apiVersion: apps/v1
kind: Deployment
metadata:
name: php-apache
spec:
selector:
matchLabels:
run: php-apache
template:
metadata:
labels:
run: php-apache
spec:
containers:
- name: php-apache
image: registry.k8s.io/hpa-example
ports:
- containerPort: 80
resources:
limits:
cpu: 500m
requests:
cpu: 200m
apiVersion: v1
kind: Service
metadata:
name: php-apache
labels:
run: php-apache
spec:
ports:
- port: 80
selector:
run: php-apache
Deploy both resources with the kubectl apply -f <file> command:
kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
Then, create a Horizontal Pod Autoscaler for the Deployment:
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
Now, we have to simulate a heavy workload. kubectl run allows creating a Pod on the fly:
allows to create a Pod on-the-flykubectl run -it load-generator --rm --image=busybox:latest -- /bin/sh -c "while sleep 0.01; do wget -q -O - http://php-apache; done"
You can watch the Horizontal Pod Autoscaler status with the kubectl get hpa php-apache --watch command:
$ kubectl get hpa php-apache --watch
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0%/50% 1 10 1 10m
php-apache Deployment/php-apache 129%/50% 1 10 1 11m
php-apache Deployment/php-apache 129%/50% 1 10 3 11m
php-apache Deployment/php-apache 138%/50% 1 10 3 12m
php-apache Deployment/php-apache 85%/50% 1 10 3 13m
php-apache Deployment/php-apache 85%/50% 1 10 6 13m
php-apache Deployment/php-apache 65%/50% 1 10 6 14m
php-apache Deployment/php-apache 51%/50% 1 10 6 15m
php-apache Deployment/php-apache 11%/50% 1 10 6 16m
php-apache Deployment/php-apache 0%/50% 1 10 6 17m
php-apache Deployment/php-apache 0%/50% 1 10 6 20m
php-apache Deployment/php-apache 0%/50% 1 10 2 21m
php-apache Deployment/php-apache 0%/50% 1 10 2 21m
php-apache Deployment/php-apache 0%/50% 1 10 1 22m