In this article, we will set up the etcd database for our Kubernetes cluster. etcd will be responsible for storing the state data of the components in our cluster.
Prerequisite
- TLS Certificates generated for etcd
Bootstrapping the etcd Cluster
The Kubernetes control plane components are themselves stateless. The cluster state is stored in the etcd database that is installed in our cluster.
Install etcd
Let's proceed to set up the etcd database inside our cluster using the instructions provided below.
wget -q --show-progress --https-only --timestamping "https://github.com/coreos/etcd/releases/download/v3.5.3/etcd-v3.5.3-linux-amd64.tar.gz"
Now, install the etcd server and the etcdctl command-line utility using the following commands:
tar -xvf etcd-v3.5.3-linux-amd64.tar.gz
sudo mv etcd-v3.5.3-linux-amd64/etcd* /usr/local/bin
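You can optionally confirm that the binaries are on the PATH and report the expected version before moving on:
etcd --version
etcdctl version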
Configure etcd
We'll now set up the certificate files we generated in the last article for our etcd cluster.
Use the commands below to set them up:
  sudo mkdir -p /etc/etcd /var/lib/etcd /var/lib/kubernetes/pki
  sudo cp etcd-server.key etcd-server.crt /etc/etcd/
  sudo cp ca.crt /var/lib/kubernetes/pki/
  sudo chown root:root /etc/etcd/*
  sudo chmod 600 /etc/etcd/*
  sudo chown root:root /var/lib/kubernetes/pki/*
  sudo chmod 600 /var/lib/kubernetes/pki/*
  sudo ln -s /var/lib/kubernetes/pki/ca.crt /etc/etcd/ca.crt
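To confirm that the certificates landed with the expected ownership and permissions (root:root, mode 600), you can list the directories (an optional check):
sudo ls -l /etc/etcd/ /var/lib/kubernetes/pki/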
Set the ETCD_NAME variable to the hostname of the node etcd is being installed on, and capture the node's internal IP along with the addresses of the master nodes (adjust the interface name and the master hostnames to match your environment):
ETCD_NAME=$(hostname -s)
INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
MASTER_1=$(dig +short master-1)
MASTER_2=$(dig +short master-2)
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the specific instance it's running on.
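Before writing the service file, it's worth confirming that these variables resolved to sensible values (a quick sanity check):
echo "ETCD_NAME=${ETCD_NAME} INTERNAL_IP=${INTERNAL_IP} MASTER_1=${MASTER_1} MASTER_2=${MASTER_2}"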
Start etcd Using a Service
Let's create a systemd service file for etcd. Since systemd does not expand shell variables, substitute the actual values of ${ETCD_NAME}, ${INTERNAL_IP}, ${MASTER_1}, and ${MASTER_2} when saving the file.
sudo vim /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/etcd-server.crt \\
  --key-file=/etc/etcd/etcd-server.key \\
  --peer-cert-file=/etc/etcd/etcd-server.crt \\
  --peer-key-file=/etc/etcd/etcd-server.key \\
  --trusted-ca-file=/etc/etcd/ca.crt \\
  --peer-trusted-ca-file=/etc/etcd/ca.crt \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master-1=https://${MASTER_1}:2380,master-2=https://${MASTER_2}:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
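Once the file is saved, you can double-check that it contains concrete values rather than literal ${...} placeholders (an optional sanity check):
grep -E '\$\{' /etc/systemd/system/etcd.service || echo "no unexpanded variables found"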
Let's start up the etcd cluster.
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
Verify the etcd Cluster Installation
Now, let's verify that the installation was a success.
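Before querying etcd, you can optionally confirm that the systemd unit is active and check its recent logs (a quick check, not part of the original steps):
sudo systemctl status etcd --no-pager
sudo journalctl -u etcd -n 20 --no-pager
With the service healthy, list the cluster members: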
sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd-server.crt \
  --key=/etc/etcd/etcd-server.key
Output:
45bf9ccad8d8900a, started, master-2, https://192.168.56.12:2380, https://192.168.56.12:2379
Success! Our etcd cluster is functioning as intended.
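For an additional liveness check beyond the member list, etcdctl can also report endpoint health (optional, using the same certificates as above):
sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd-server.crt \
  --key=/etc/etcd/etcd-server.key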
Frequently Asked Questions:
What is the use of etcd?
etcd stores the state of the different components in the Kubernetes cluster. The control plane components rely on this state data, read through the API server, to schedule and remove pods in the cluster.
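For example, once the kube-apiserver is running against this etcd cluster, Kubernetes objects appear as keys under the /registry prefix. A short illustration (the certificate paths follow the setup in this article, and /registry assumes the API server's default etcd key layout):
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd-server.crt \
  --key=/etc/etcd/etcd-server.key \
  get /registry --prefix --keys-only --limit=10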
Why is a stateful workload like etcd deployed in Kubernetes?
The main drawback of running stateful workloads in Kubernetes is storage. As Kubernetes was built with stateless workloads in mind, running stateful workloads presents a different set of challenges. However, etcd's dataset is relatively small and it runs only on the control plane nodes, so it is well suited to backing a Kubernetes cluster.
How can I take backups of my etcd cluster?
A backup of the etcd cluster can be taken as follows:
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=<ca-file> \
  --cert=<cert-file> \
  --key=<key-file> \
  snapshot save <backup-file-location>
Example of the backup command:
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /opt/backup/etcd.db
Verifying the backup:
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /opt/backup/etcd.db
Output:
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| b7147656 |    51465 |       1099 |     5.1 MB |
+----------+----------+------------+------------+
Restore etcd
We have the backup at /opt/backup/etcd.db. Now, we will use the snapshot backup to restore etcd.
Here is the command to restore etcd:
ETCDCTL_API=3 etcdctl snapshot restore <backup-file-location>
Let's execute the etcd restore command. In my case /opt/backup/etcd.db is the backup file.
ETCDCTL_API=3 etcdctl snapshot restore /opt/backup/etcd.db
If you wish to restore to a specific directory, the --data-dir flag can be used to achieve this.
ETCDCTL_API=3 etcdctl --data-dir /opt/etcd snapshot restore /opt/backup/etcd.db
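Restoring into a new directory does not by itself switch etcd over to it. A rough sketch of the remaining steps, assuming the systemd setup from this article (the exact procedure depends on how etcd is run in your cluster):
sudo systemctl stop etcd
sudo vim /etc/systemd/system/etcd.service   # point --data-dir at /opt/etcd
sudo systemctl daemon-reload
sudo systemctl start etcd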
Conclusion
In this article, we set up the etcd database, created its systemd service file, and learned how to back up and restore the etcd database.
Please comment below if you have any queries or concerns; this article will be periodically maintained for accuracy!
