Setup Kubernetes 1.14 Cluster on CentOS 7.6

This tutorial will showcase the step-by-step process of setting up a Kubernetes 1.14 cluster, allowing organizations and communities to leverage eagerly anticipated new features, especially Windows containers: Kubernetes 1.14 supports Windows containers out of the box and lets you add Windows nodes to a cluster.

NOTE: Please make sure swap is disabled on the master and worker nodes, or the Kubernetes setup will fail. A quick way to disable it is shown below.
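
If swap is still on, the following is a common way to disable it immediately and keep it disabled across reboots (the sed pattern assumes your /etc/fstab swap entry contains the word "swap"):

# turn off all swap devices right now
sudo swapoff -a
# comment out any swap entries so the change survives a reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab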

Prerequisites

  • Swap is disabled.
  • You should know how to install CentOS 7 and be familiar with using a sudo user.

Tutorial

System updates

Let’s go ahead and first update our Linux system with all security patches and other upgrades to ensure the system is up to date.

sudo yum update -y

After your system has been updated, we are ready to set up the Kubernetes cluster. We will first set up Docker and then install the Kubernetes packages.

Install and set up Master and Worker Nodes

The steps below are common to both master and worker nodes. Apply them to every node before moving on to the node-specific steps.

Install and Set Up Docker

Execute the below command to install Docker.

sudo yum install -y docker

Now we need to enable and start Docker as a service.

sudo systemctl enable docker && sudo systemctl start docker

To verify that you have Docker version 1.13 or higher, execute the below command.

sudo docker version
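
If you just want the version number, for example to check it from a script, Docker’s --format flag can print it on its own (this should work on any reasonably recent Docker release):

# print only the Docker server version string
sudo docker version --format '{{.Server.Version}}'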

Install Kubernetes packages

In order to grab the latest packages for Kubernetes, we need to configure our yum repository. Copy and paste the below lines to create a new config file for the Kubernetes yum repo.

sudo bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF'
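
Optionally, you can confirm that yum now sees the new repository before installing anything:

# the kubernetes repo should appear in the repo list
sudo yum repolist | grep -i kubernetes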

Set SELinux to permissive mode on all the nodes to prevent communication issues. The first command applies the change immediately; the second makes it persist across reboots.

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Now install the Kubernetes packages. The --disableexcludes=kubernetes flag is needed because the repo file above excludes kube* packages from ordinary yum operations.

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

After the installation completes, let’s enable and start kubelet as a service.

sudo systemctl enable kubelet && sudo systemctl start kubelet
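
Don’t be alarmed if kubelet shows as restarting in a crash loop at this point; without a cluster configuration it exits every few seconds until kubeadm init or kubeadm join is run. You can check its current state with:

sudo systemctl status kubelet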

Master node setup

Allow ports 6443 and 10250 through firewalld on the master node.

sudo firewall-cmd --permanent --add-port=6443/tcp && sudo firewall-cmd --permanent --add-port=10250/tcp && sudo firewall-cmd --reload
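
You can verify that both ports were added to the permanent configuration:

sudo firewall-cmd --permanent --list-ports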

NOTE: If you do not execute the above commands, you will see the below warning during Kubernetes initialization.
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open, or your cluster may not function correctly.

Set iptables settings

Copy and paste the below lines on both your master and worker nodes. This makes bridged network traffic visible to iptables, which Kubernetes networking relies on.

sudo bash -c 'cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'

Apply the changes by executing the below command.

sudo sysctl --system
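
To confirm the values were applied, query one of the keys; it should print 1. If sysctl reports an unknown key, load the br_netfilter module as shown in the next step and re-run sudo sysctl --system.

sysctl net.bridge.bridge-nf-call-iptables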

Load the br_netfilter module

The lsmod command only reports whether a module is loaded, so load the module explicitly first and then verify it is present.

sudo modprobe br_netfilter
sudo lsmod | grep br_netfilter
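
The module load above does not survive a reboot. A common way to make it persistent on CentOS 7 is to drop a config file into /etc/modules-load.d (the file name k8s.conf is just a convention):

# tell systemd to load br_netfilter at every boot
sudo bash -c 'echo br_netfilter > /etc/modules-load.d/k8s.conf'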

Configure Kubernetes Master node

Now that we are done installing the required packages and configuring the system, let’s go ahead and start configuring the cluster.

First, we will pull all the images that are used during Kubernetes initialization. This step is optional, since kubeadm pulls the images automatically during initialization, but I recommend fetching them up front so you don’t have to worry about them later.

sudo kubeadm config images pull

After all the images have been pulled, let’s get started with the cluster setup. The --pod-network-cidr value matches the default subnet used by the Flannel network we will deploy later.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

During initialization, if you receive the following error, it means you did not disable swap. Disable swap, reboot your system, and try again.

[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...

Or, if you receive the below error, ensure you have applied the iptables settings provided above.

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...

After you have successfully set up your Kubernetes cluster, your output should look similar to the below. Please make a note of the kubeadm join command, including its token and certificate hash, for joining worker nodes to the cluster.

[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-node kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-node localhost] and IPs [192.168.0.120 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-node localhost] and IPs [192.168.0.120 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.501860 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8s-master-node as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3j2pkk.xk7tnltycyz2xh5n
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.120:6443 --token khm95w.mo0wwenu2o9hglls \
    --discovery-token-ca-cert-hash sha256:aeb0ca593b63c8d674719858fd2397825825cebc552e3c165f00edb9671d6e32

Add the cluster settings to your regular user so that you can access the Kubernetes cluster locally.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
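
With the kubeconfig in place, a quick sanity check confirms that kubectl can talk to the API server:

kubectl cluster-info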

Apply network settings for pods

Deploy Flannel as the pod network; its default subnet matches the --pod-network-cidr we passed to kubeadm init.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
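
You can watch the Flannel and CoreDNS pods come up; once they are all Running, the master node should report Ready (press Ctrl+C to stop watching):

kubectl get pods -n kube-system -w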

That’s it! Your master node is set up.

Worker node setup

If you have more than one node, apply the below steps to each worker node.

Use the join command printed by kubeadm init on your master node to join the cluster. The token and hash below will be different for you.

sudo kubeadm join 192.168.0.120:6443 --token khm95w.mo0wwenu2o9hglls \
    --discovery-token-ca-cert-hash sha256:aeb0ca593b63c8d674719858fd2397825825cebc552e3c165f00edb9671d6e32
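
If you no longer have the original join command, you can generate a fresh one on the master node; bootstrap tokens expire after 24 hours by default.

sudo kubeadm token create --print-join-command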

If you receive the following output, it means you have successfully connected to the Kubernetes master node.

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

To check whether your node joined the cluster, execute the below command from the master node.

kubectl get nodes

Your output should look something like the below. Nodes report NotReady until the Flannel pods are running on them; after a minute or two the status should change to Ready.

NAME                STATUS     ROLES    AGE     VERSION
k8s-master-node     NotReady   master   37m     v1.14.0
k8s-worker-node-1   NotReady   <none>   8m19s   v1.14.0

Troubleshooting Master node issues

If you are seeing CrashLoopBackOff for the coredns pods as shown below, it is likely that firewalld on the worker node is blocking connectivity with the master node.

[rahil@k8s-master-node ~]$ kubectl get pods -A -o wide
NAMESPACE     NAME                                      READY   STATUS             RESTARTS   AGE   IP              NODE                NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-9jd8r                   0/1     CrashLoopBackOff   15         19h   10.244.1.7      k8s-worker-node-1   <none>           <none>
kube-system   coredns-fb8b8dccf-kfjsz                   0/1     CrashLoopBackOff   15         19h   10.244.1.6      k8s-worker-node-1   <none>           <none>

The quickest fix is to stop firewalld on the worker nodes completely; execute the below command to disable and stop it. A narrower alternative follows.

sudo systemctl disable firewalld && sudo systemctl stop firewalld && sudo systemctl status firewalld
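
A less drastic alternative, which I have not tested as thoroughly, is to keep firewalld running on the worker nodes and open only the ports the cluster traffic needs: 10250/tcp for the kubelet and 8472/udp for Flannel’s default VXLAN backend.

# open the kubelet port
sudo firewall-cmd --permanent --add-port=10250/tcp
# open the Flannel VXLAN port
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --reload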

Another problem you may hit is kubeadm init itself timing out while waiting for the control plane, with output like the below.

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'

If your master node setup wasn’t successful and you are seeing the above errors, I recommend starting over from scratch. I spent several hours troubleshooting before re-imaging my VM and reconfiguring everything step by step. Good luck!

Optional but recommended: set up the Kubernetes dashboard

From the master node, execute the below command to create a new deployment for the Kubernetes dashboard. You can also reference the project’s GitHub site for more information.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
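
Once the dashboard pods are running, one common way to reach the UI from the master node is through kubectl proxy. With the v1.10.1 manifest above, the dashboard is deployed into the kube-system namespace, so while the proxy is running it should be available at the URL below; you will still need a token or kubeconfig to log in.

kubectl proxy

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/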
