Building a Kubernetes Cluster on CentOS 7
* OS:
CentOS 7.6 (1810)
* Nodes:
master:
node1: 192.168.1.1
workers:
node2: 192.168.1.2
node3: 192.168.1.3
* Pre-configuration:
- firewalld: disabled
- selinux: disabled
- dns or /etc/hosts: configured
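### Note: a minimal sketch of the pre-configuration above, using this
### guide's example hostnames/IPs; adjust them to your own network:
systemctl disable --now firewalld
setenforce 0   # permissive immediately; disabled after the next reboot via the line below
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
cat >> /etc/hosts << EOF
192.168.1.1 node1.example.com node1
192.168.1.2 node2.example.com node2
192.168.1.3 node3.example.com node3
EOF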
* Steps
### Ref:
### https://www.howtoforge.com/tutorial/centos-kubernetes-docker-cluster/
1. update packages:
yum update -y
reboot
2. enable br_netfilter:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
3. turn off swap:
swapoff -a
sed -i '/^[^ ]\+ \+swap \+/s/^/#/' /etc/fstab
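### Note: optional sanity check that swap is really gone:
swapon -s   # prints nothing once swap is off
free -h     # the Swap line should show 0 total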
4. install docker-ce:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
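### Note: optional check that the repo was added and the package installed:
yum repolist enabled | grep -i docker
rpm -q docker-ce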
5. install kubernetes:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
### Note: gpgcheck=1 can break the kubeadm install on some systems; set it back to 1 only if the install succeeds with it.
yum install -y kubelet kubeadm kubectl
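### Note: optional check, not in the original guide; all three packages
### should report the same version:
rpm -q kubelet kubeadm kubectl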
6. reboot OS:
reboot
7. start and enable services:
systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
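### Note: a quick status check; kubelet restarting in a loop at this point
### is expected, as it stays unconfigured until 'kubeadm init' / 'kubeadm join':
systemctl is-active docker
systemctl is-enabled docker kubelet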
8. fix cgroupfs issue:
kadm_conf=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
grep -q 'KUBELET_KUBECONFIG_ARGS=.* --cgroup-driver=cgroupfs"' $kadm_conf || sed -i '/KUBELET_KUBECONFIG_ARGS=/s/"$/ --cgroup-driver=cgroupfs"/' $kadm_conf
systemctl daemon-reload
systemctl restart kubelet
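### Note: optional check that the kubelet flag now matches the driver docker
### reports (docker-ce defaults to cgroupfs):
docker info 2>/dev/null | grep -i 'cgroup driver'
grep -o 'cgroup-driver=[a-z]*' $kadm_conf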
### Note: run the steps above on all nodes (both master and workers)
9. enable master (Run on node1 only):
kubeadm init --apiserver-advertise-address=192.168.1.1 --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
### Note: copy the 'kubeadm join 192.168.1.1:6443 --token XXXXXXXXXXX' line from the output and save it to a text file
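### Note: if the join line is lost, or its token expires (default lifetime
### is 24 hours), a fresh one can be printed on the master at any time:
kubeadm token create --print-join-command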
10. verify the master node:
kubectl get nodes
kubectl get pods --all-namespaces
11. enable workers (Run on node2 & node3 only):
### paste the join command copied from the master:
kubeadm join 192.168.1.1:6443 --token XXXXXXXXXXX...
12. verify nodes (Run on node1):
### you should see something like this:
[root@node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready master 12m v1.13.1
node2.example.com Ready <none> 4m9s v1.13.1
node3.example.com Ready <none> 35m v1.13.1
* Dashboard:
### Ref:
### https://github.com/kubernetes/dashboard
### https://github.com/kubernetes/dashboard/wiki/Creating-sample-user
1. create dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy
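### Note: kubectl proxy serves only on 127.0.0.1:8001 and blocks the shell;
### one option is to check the pod first and background the proxy:
kubectl -n kube-system get pods | grep dashboard   # wait until Running
kubectl proxy &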
2. create admin user:
cat > dashboard-adminuser.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF
cat > rolebinding.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
kubectl apply -f dashboard-adminuser.yaml
kubectl apply -f rolebinding.yaml
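### Note: optional check that both objects were created:
kubectl -n kube-system get serviceaccount admin-user
kubectl get clusterrolebinding admin-user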
3. view and copy login token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | awk '/^token:/{print $2}'
4. access dashboard:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
### select 'Token' and paste the login token
--- end ---