Sunday, 29 December 2019
Building a K8s Cluster with Kubeadm
We build a K8s cluster for managing containers and use Kubeadm to simplify the process of setting up a cluster.
# Install Docker on all three nodes
Add the Docker GPG Key:
```
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```
Add the Docker repository:
```
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
```
Update packages:
```
sudo apt-get update
```
Install Docker:
```
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
```
Hold Docker at this specific version:
```
sudo apt-mark hold docker-ce
```
Verify that Docker is up and running with:
```
sudo systemctl status docker
```
After running the above commands, the Docker service status should be ``active (running)``.
```
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2019-10-20 11:15:32 UTC; 19s ago
Docs: https://docs.docker.com
Main PID: 9869 (dockerd)
Tasks: 21
CGroup: /system.slice/docker.service
├─9869 /usr/bin/dockerd -H fd://
└─9894 docker-containerd --config /var/run/docker/containerd/containerd.toml
```
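If you want to script this check instead of reading the status output, you can test for the ``active (running)`` line (on a real node, ``systemctl is-active docker`` is the simplest test). A minimal sketch, run here against the sample status line above:

```shell
# Sample line from `sudo systemctl status docker` (see output above).
# In practice: status_output="$(systemctl status docker)"
status_output='Active: active (running) since Sun 2019-10-20 11:15:32 UTC; 19s ago'

# Docker is healthy if the status reports "active (running)".
if echo "$status_output" | grep -q 'active (running)'; then
  echo "docker is running"
else
  echo "docker is NOT running"
fi
```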
# Install Kubeadm, Kubelet, and Kubectl
Install the K8s components by running the following commands on all three nodes.
Add the K8s GPG Key:
```
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
```
Add the K8s repo:
```
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
```
Update packages:
```
sudo apt-get update
```
Install ``kubelet``, ``kubeadm``, and ``kubectl``:
```
sudo apt-get install -y kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00
```
Hold the Kubernetes components at this specific version:
```
sudo apt-mark hold kubelet kubeadm kubectl
```
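To confirm the holds took effect, ``apt-mark showhold`` lists the pinned packages, one per line. A sketch of the check, run here against sample output (on a real node, substitute ``held="$(apt-mark showhold)"``):

```shell
# Sample output of `apt-mark showhold` after the hold commands above.
held="docker-ce
kubeadm
kubectl
kubelet"

# Verify each Kubernetes component appears as a full line in the held list.
for pkg in kubelet kubeadm kubectl; do
  if echo "$held" | grep -qx "$pkg"; then
    echo "$pkg is held"
  else
    echo "$pkg is NOT held"
  fi
done
```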
# Bootstrap the cluster on the Kube master node
Initialize kubeadm on the master node:
```
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```
After a few minutes, you should see a ``kubeadm join`` command that will be used later:
```
kubeadm join 10.0.1.101:6443 --token ioxxtp.zugcxykam7jhmlqe --discovery-token-ca-cert-hash sha256:1feab8ca98d50689b5a524c1271b43a7c712d66dab0d6ab7b68c9fd472921731
```
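Note that bootstrap tokens expire after 24 hours by default; if yours has expired, running ``sudo kubeadm token create --print-join-command`` on the master prints a fresh join command. The token itself always has the form ``<6 chars>.<16 chars>`` over the charset ``[a-z0-9]``; a quick format sanity check on the sample token above:

```shell
# kubeadm bootstrap tokens have the form <6 chars>.<16 chars>, charset [a-z0-9].
token='ioxxtp.zugcxykam7jhmlqe'

if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "token format invalid"
fi
```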
Set up the local kubeconfig:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Verify that the Kube master node is up and running:
```
kubectl version
```
You should see both a ``Client Version`` and a ``Server Version``, similar to the output below:
```
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7", GitCommit:"6f482974b76db3f1e0f5d24605a9d1d38fad9a2b", GitTreeState:"clean", BuildDate:"2019-03-25T02:52:13Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.10", GitCommit:"e3c134023df5dea457638b614ee17ef234dc34a6", GitTreeState:"clean", BuildDate:"2019-07-08T03:40:54Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
```
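The client and server versions need not match exactly (here v1.12.7 vs v1.12.10); kubectl is supported within one minor version of the API server. If you want to extract just the version strings from that output, a sketch against abbreviated sample lines (the ``...`` stands in for the remaining fields shown above):

```shell
# Abbreviated sample lines from `kubectl version` (see output above).
version_output='Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7", ...}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.10", ...}'

# Pull out the GitVersion fields, one per line.
echo "$version_output" | grep -o 'GitVersion:"[^"]*"'
```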
## Join the two Kube worker nodes to the cluster
Once the Kube master node is ready, we need to join the two Kube worker nodes to the cluster.
Copy the ``kubeadm join`` command that was printed by the ``kubeadm init`` command in the previous step, and run it on each worker node with ``sudo``:
```
sudo kubeadm join 10.0.1.101:6443 --token ioxxtp.zugcxykam7jhmlqe --discovery-token-ca-cert-hash sha256:1feab8ca98d50689b5a524c1271b43a7c712d66dab0d6ab7b68c9fd472921731
```
Back on the Kube master node, check whether the nodes have joined the cluster successfully:
```
kubectl get nodes
```
All three nodes should be listed, but in the ``NotReady`` state:
```
NAME            STATUS     ROLES    AGE   VERSION
ip-10-0-1-101   NotReady   master   30s   v1.12.2
ip-10-0-1-102   NotReady   <none>   8s    v1.12.2
ip-10-0-1-103   NotReady   <none>   5s    v1.12.2
```
# Set up cluster networking
To get the nodes into the ``Ready`` state, we need to install a pod network add-on, because K8s does not provide a default network implementation. We will use Flannel, a simple overlay network that satisfies the Kubernetes requirements and is widely used with Kubernetes.
Turn on iptables bridge calls on all three nodes:
```
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```
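You can confirm the setting took effect with ``sysctl net.bridge.bridge-nf-call-iptables``, which should print ``net.bridge.bridge-nf-call-iptables = 1``. A sketch of a scripted check against that expected output (on a real node, use ``out="$(sysctl net.bridge.bridge-nf-call-iptables)"``):

```shell
# Expected output of `sysctl net.bridge.bridge-nf-call-iptables` after `sysctl -p`.
out='net.bridge.bridge-nf-call-iptables = 1'

# The value is the last field; it must be 1 for bridged traffic to hit iptables.
value="$(echo "$out" | awk '{print $NF}')"
if [ "$value" = "1" ]; then
  echo "bridge-nf-call-iptables enabled"
fi
```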
Apply Flannel on the Kube master node:
```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
```
Once Flannel is installed, verify the node status:
```
kubectl get nodes
```
After a short time, all three nodes should be in the ``Ready`` state.
```
NAME            STATUS   ROLES    AGE   VERSION
ip-10-0-1-101   Ready    master   85s   v1.12.2
ip-10-0-1-102   Ready    <none>   63s   v1.12.2
ip-10-0-1-103   Ready    <none>   60s   v1.12.2
```
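Waiting for all nodes to become ``Ready`` can also be scripted by counting the STATUS column of ``kubectl get nodes --no-headers``. A sketch, demonstrated here against the sample output above (in real use, replace the here-variable with the live command):

```shell
# Sample output of `kubectl get nodes --no-headers` (see table above).
# In practice: nodes="$(kubectl get nodes --no-headers)"
nodes='ip-10-0-1-101   Ready   master   85s   v1.12.2
ip-10-0-1-102   Ready   <none>   63s   v1.12.2
ip-10-0-1-103   Ready   <none>   60s   v1.12.2'

# Count nodes whose STATUS column (field 2) is exactly "Ready".
ready="$(echo "$nodes" | awk '$2 == "Ready" {c++} END {print c+0}')"
total="$(echo "$nodes" | awk 'END {print NR}')"
echo "$ready/$total nodes Ready"
```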