Kubernetes is a container orchestration platform designed with modern applications in mind. Modern workloads demand an architecture that can adapt to changing load, and this is where Kubernetes comes into play, particularly for application scaling.

Let's take the example of a workload that rises and falls sharply depending on the time of day. We need to scale the app up and down to handle the traffic as the load changes. Kubernetes takes care of this by decreasing the number of app instances during low-traffic periods and increasing them during high-traffic ones.

In this article, we will build a Kubernetes cluster from the ground up, learning about the different configuration options, common issues, and troubleshooting along the way.

Let's get started!

Prerequisites

  • Local System (4 GB RAM, CPU capable of virtualization)

OR

  • Cloud (1 GB RAM, Linux Operating System) x 2

Setting Up the Fundamentals

For this demonstration, we will use two servers: one as the master node and the other as a worker node. We need a minimum of 1 GB RAM on the master node and a minimum of 512 MB on the worker node. This can also be done on our local system, as long as the two machines can reach each other over the network.

It is generally recommended to run an odd number of control plane nodes, three or more, for an optimal cluster configuration. This boils down to the proper functioning of Kubernetes features like leader election, quorum, and the high availability of etcd: for example, a three-node etcd cluster needs two nodes in agreement to form a quorum, so it can tolerate one failure. We will learn more about these later on.

ℹ️
If you want to run larger workloads, increasing the available RAM on both the master and worker nodes is a good idea.

SSH Setup

An SSH key pair can be generated on any machine with the OpenSSH command ssh-keygen.
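
Running it with the defaults (pressing Enter at each prompt) is enough for our purposes:

ssh-keygen
Generating the SSH Key Pair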

Output:

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:xyz..random-value root@d9c3306faa23
The key's randomart image is:
+---[RSA 3072]----+
|             =+OB|
|             .B+=|
|          . +.B* |
|          E*.B.oo|
|        S.o== . .|
|         o==+..  |
|          ++o= . |
|           o+ ...|
|          ..oo+o |
+----[SHA256]-----+
SSH-Keygen Output

Once you create the key pair, copy the contents of the public key (id_rsa.pub) on the master node and append it to the authorized_keys file on the worker node, then do the same in the other direction.

If the file is not present, just create it with the following command:
touch ~/.ssh/authorized_keys
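
Alternatively, if password authentication is still enabled between the machines, OpenSSH's ssh-copy-id can append the key for you in one step. Here, root and WORKER_IP are placeholders for your actual user and worker address:

ssh-copy-id root@WORKER_IP
Copying the Public Key with ssh-copy-id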

The public key authorizes the servers to access each other, which we will rely on later when scheduling pods.

Component Setup

We'll install the tools required to run the Kubernetes cluster on these machines.

First, let's set up the master node with the required tools.

Installing Tools

kubectl

wget -q --show-progress --https-only --timestamping https://storage.googleapis.com/kubernetes-release/release/v1.24.3/bin/linux/amd64/kubectl

chmod +x kubectl

sudo mv kubectl /usr/local/bin
Installing kubectl
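
To verify the installation, ask the binary for its client version; it should report v1.24.3:

kubectl version --client
Verifying kubectl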

kube-apiserver

wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.24.3/bin/linux/amd64/kube-apiserver"
  
chmod +x kube-apiserver

sudo mv kube-apiserver /usr/local/bin
Installing kube-apiserver

kube-controller-manager

wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.24.3/bin/linux/amd64/kube-controller-manager"

chmod +x kube-controller-manager

sudo mv kube-controller-manager /usr/local/bin
Installing kube-controller-manager

kube-scheduler

wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.24.3/bin/linux/amd64/kube-scheduler"

chmod +x kube-scheduler

sudo mv kube-scheduler /usr/local/bin
Installing kube-scheduler
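
Before moving on, it is worth a quick sanity check that all four binaries are on the PATH:

for bin in kubectl kube-apiserver kube-controller-manager kube-scheduler; do
  command -v "$bin" || echo "$bin is missing"
done
Verifying the Installed Binaries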

Generating Data Encryption Config

In Kubernetes, various data, such as cluster state, application configurations, and secrets, are stored in the cluster. Kubernetes can encrypt cluster data at rest, i.e., the data stored within etcd.

Now, we'll generate an encryption key and create an encryption config, which Kubernetes will use to secure Secrets in the cluster.

Generating the Key

To generate an encryption key, run the following command:

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

✍️
The above command reads 32 random bytes from /dev/urandom and base64-encodes them.
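
If you want to confirm that the key decodes back to exactly 32 bytes (the key length the aescbc provider expects for AES-256), check it like so:

echo "$ENCRYPTION_KEY" | base64 -d | wc -c
Verifying the Key Length (prints 32)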

Generating Config

Let's create a file called encryption-config.yaml containing the manifest below. Since the manifest references the ${ENCRYPTION_KEY} shell variable we just set, generate the file with a heredoc so the shell substitutes the actual key rather than the literal variable name:

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
Encryption Configuration YAML Manifest

Now, move the file to /var/lib/kubernetes, which is the directory that stores all the Kubernetes configuration.
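
Here's one way to do that, creating the directory first since it may not exist yet:

sudo mkdir -p /var/lib/kubernetes

sudo mv encryption-config.yaml /var/lib/kubernetes/
Moving the Encryption Config

When we configure the kube-apiserver in a later part, its --encryption-provider-config flag will point at this file.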

Conclusion

In this article, we installed the components that make up a Kubernetes cluster, configured the encryption key and its corresponding YAML manifest, and generated SSH keys to enable inter-server access.

In the next article, we will learn to set up the Certificate Authority (CA) and TLS certificates for all the components, and initialize the etcd database!

Thank you for reading! If you find any inaccuracies or have any queries, please comment down below.