The Kubernetes control plane is the central management layer of a Kubernetes cluster. It is responsible for maintaining the cluster's desired state, which includes managing the creation, scaling, and deletion of application components such as pods, services, and replica sets.

Components of a Control Plane

The control plane consists of several components that work together to orchestrate the cluster.

1.  API server

This is the control plane's central component and the main entry point for all API requests. The API server validates all changes to the cluster state and persists them to etcd.

2. etcd

This is a distributed key-value store that stores the configuration data of the cluster. The API server uses the data stored in etcd to determine the cluster's desired state.

3. Controller manager

This component runs various controllers responsible for ensuring that the current state of the cluster matches the desired state. For example, the replication controller ensures that the desired number of replicas of a pod is running.

4. Scheduler

This component assigns pods to nodes based on resource requirements and placement constraints such as affinity rules and taints.

These components form the control plane and provide the essential functionality required to run and manage a Kubernetes cluster.
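All of the controllers run by the controller manager follow the same reconciliation pattern: observe the current state, compare it with the desired state, and act to close the gap. As a rough illustration only (a toy `reconcile` helper standing in for a real controller, not actual Kubernetes code):

```shell
# Toy reconciliation loop: compare the current replica count with the
# desired count and report the action a controller would take.
reconcile() {
  local current=$1 desired=$2
  if [ "$current" -lt "$desired" ]; then
    echo "create $(( desired - current )) replicas"
  elif [ "$current" -gt "$desired" ]; then
    echo "delete $(( current - desired )) replicas"
  else
    echo "in sync"
  fi
}

reconcile 1 3   # a replication controller seeing 1 of 3 desired pods
```

A real controller watches the API server for changes and repeats this loop continuously rather than running it once.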

Prerequisites

  • Generated TLS certificates for the control plane components, along with the CA certificate
  • Created kubeconfig files for the control plane components
💡
In a previous article, we covered the installation of individual binaries for the control plane components. You can find it here for reference.

Managing Certificates

Let's create a directory where we will store the configuration files and certificates required by Kubernetes. After that, we'll move the certificates from the home directory to the specific directory responsible for storing TLS certificates.

  sudo mkdir -p /var/lib/kubernetes/pki

  # Copy (rather than move) the CA certificate and key, as we'll need them again for the workers.
  sudo cp ca.crt ca.key /var/lib/kubernetes/pki
  for cert in kube-apiserver service-account apiserver-kubelet-client etcd-server kube-scheduler kube-controller-manager
  do
    sudo mv "$cert.crt" "$cert.key" /var/lib/kubernetes/pki/
  done
  sudo chown root:root /var/lib/kubernetes/pki/*
  sudo chmod 600 /var/lib/kubernetes/pki/*
Setting up TLS Certificates
✍️
The certificates we previously generated in the blog, TLS Certificate Management, are now moved into the /var/lib/kubernetes/pki directory.
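Before moving on, it can be worth confirming that each certificate actually pairs with its private key; a mismatched pair is a common source of cryptic TLS errors later. A small helper of my own (not part of the Kubernetes toolchain) compares the public key embedded in the certificate with the one derived from the key:

```shell
# check_pair CERT KEY: print "match" if the certificate's public key
# equals the public key derived from the private key.
check_pair() {
  local cert_pub key_pub
  cert_pub=$(openssl x509 -noout -pubkey -in "$1") || return 1
  key_pub=$(openssl pkey -in "$2" -pubout) || return 1
  if [ "$cert_pub" = "$key_pub" ]; then
    echo "match: $(basename "$1")"
  else
    echo "MISMATCH: $(basename "$1")" >&2
    return 1
  fi
}

# Example (run as root on the control plane node):
# check_pair /var/lib/kubernetes/pki/kube-apiserver.crt /var/lib/kubernetes/pki/kube-apiserver.key
```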

Managing Config Files

ℹ️
Before proceeding with the following steps, we suggest reviewing our blog on generating kubeconfig files for additional context.

Move the generated kubeconfig files into the Kubernetes directory.

sudo mv kube-controller-manager.kubeconfig kube-scheduler.kubeconfig /var/lib/kubernetes/

Also, let's secure the generated kubeconfigs by granting read/write access only to their owner.

sudo chmod 600 /var/lib/kubernetes/*.kubeconfig

Setting up Services

1. Configure kube-apiserver

Set the MASTER environment variable to the IP address of your control plane node:

MASTER="192.168.1.22"

Set the CIDR ranges for pods and services, respectively, using the commands below:

POD_CIDR=10.244.0.0/24

SERVICE_CIDR=10.96.0.0/24
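A quick sanity check on these ranges: a /24 leaves 254 assignable addresses, so the service range above caps the cluster at 254 ClusterIPs. Note also that with a /24 pod CIDR and the controller manager's `--node-cidr-mask-size=24` (set below), only one node can be allocated a pod range; multi-node setups commonly use a wider cluster CIDR such as a /16. A tiny helper (my own, for illustration) to compute the usable size of a CIDR:

```shell
# cidr_hosts CIDR: number of assignable addresses in the range
# (total addresses minus the network and broadcast addresses).
cidr_hosts() {
  local prefix=${1#*/}
  echo $(( (1 << (32 - prefix)) - 2 ))
}

cidr_hosts 10.96.0.0/24    # 254
cidr_hosts 10.244.0.0/16   # 65534
```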

Now let's create the kube-apiserver service.

cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${MASTER} \\
  --allow-privileged=true \\
  --apiserver-count=2 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/pki/ca.crt \\
  --enable-admission-plugins=NodeRestriction,ServiceAccount \\
  --enable-bootstrap-token-auth=true \\
  --etcd-cafile=/var/lib/kubernetes/pki/ca.crt \\
  --etcd-certfile=/var/lib/kubernetes/pki/etcd-server.crt \\
  --etcd-keyfile=/var/lib/kubernetes/pki/etcd-server.key \\
  --etcd-servers=https://${MASTER}:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/pki/ca.crt \\
  --kubelet-client-certificate=/var/lib/kubernetes/pki/apiserver-kubelet-client.crt \\
  --kubelet-client-key=/var/lib/kubernetes/pki/apiserver-kubelet-client.key \\
  --runtime-config=api/all=true \\
  --service-account-key-file=/var/lib/kubernetes/pki/service-account.crt \\
  --service-account-signing-key-file=/var/lib/kubernetes/pki/service-account.key \\
  --service-account-issuer=https://${MASTER}:6443 \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/pki/kube-apiserver.crt \\
  --tls-private-key-file=/var/lib/kubernetes/pki/kube-apiserver.key \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Manifest for kube-apiserver service
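Because the heredoc above is unquoted, `${MASTER}`, `${POD_CIDR}`, and `${SERVICE_CIDR}` are expanded when the file is written; if any of them is unset, the corresponding flag is silently written with an empty value. A small check of my own devising (not part of systemd) that scans a rendered unit file for empty flag values:

```shell
# unit_check FILE: report any --flag= whose value expanded to nothing.
unit_check() {
  if grep -nE -- '--[A-Za-z-]+=( |\\|$)' "$1"; then
    echo "empty flag values found in $1" >&2
    return 1
  fi
  echo "ok: $1"
}

# Example: unit_check /etc/systemd/system/kube-apiserver.service
```

The same check applies to the controller manager and scheduler unit files created below.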

2. Configure kube-controller-manager

Create the kube-controller-manager service.

cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --allocate-node-cidrs=true \\
  --authentication-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --authorization-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --bind-address=127.0.0.1 \\
  --client-ca-file=/var/lib/kubernetes/pki/ca.crt \\
  --cluster-cidr=${POD_CIDR} \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/pki/ca.crt \\
  --cluster-signing-key-file=/var/lib/kubernetes/pki/ca.key \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --node-cidr-mask-size=24 \\
  --requestheader-client-ca-file=/var/lib/kubernetes/pki/ca.crt \\
  --root-ca-file=/var/lib/kubernetes/pki/ca.crt \\
  --service-account-private-key-file=/var/lib/kubernetes/pki/service-account.key \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Manifest for kube-controller-manager service

3. Configure kube-scheduler

Create the kube-scheduler service.

cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Manifest for kube-scheduler service

Starting Services

Use the commands below to register, enable, and start the services. Note that systemctl daemon-reload must run first, so that systemd picks up the newly created unit files.

# Reloads systemd so it picks up the new unit files
sudo systemctl daemon-reload

# Enables the services to start at boot
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler

# Starts the services
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
Commands for Registering, Enabling and Starting the Services
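The services can take a few seconds to become healthy, so when scripting the verification it helps to retry rather than check once. A generic `retry` helper (my own convenience wrapper, not a standard tool):

```shell
# retry N CMD...: run CMD up to N times, one second apart,
# returning success as soon as CMD succeeds.
retry() {
  local attempts=$1; shift
  local i
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Example: wait up to 30s for the API server's health endpoint.
# (-k skips TLS verification for brevity; prefer --cacert ca.crt in practice)
# retry 30 curl -sfk https://127.0.0.1:6443/healthz
```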

Let's check the component status to verify that everything we've done so far works as intended. (As the warning in the output notes, the ComponentStatus API is deprecated since v1.19; on newer clusters, kubectl get --raw='/readyz?verbose' is the preferred health check.)

kubectl get componentstatuses --kubeconfig admin.kubeconfig

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
Output of Component Status Check

Conclusion

In this article, we learned about and set up the different components that make up the Kubernetes control plane.

Please comment below if you have any questions. I regularly update the article to keep it accurate and easy to follow!