Author Topic: Building Kubernetes Cluster on CentOS7  (Read 19192 times)


netman

Building Kubernetes Cluster on CentOS7
« on: 2019-01-15 21:22 »

* OS:
CentOS 7.6 (1810)

* Nodes:
  master:
    node1: 192.168.1.1
  workers:
    node2: 192.168.1.2
    node3: 192.168.1.3

* Pre-configuration:
  - firewalld: disabled
  - selinux: disabled
  - dns or /etc/hosts: configured

* Steps
### Ref: https://www.howtoforge.com/tutorial/centos-kubernetes-docker-cluster/

1. update packages:
yum update -y
reboot

2. enable br_netfilter:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf

3. turn off swap:
swapoff -a
sed -i '/^[^#][^[:blank:]]*[[:blank:]]\+swap[[:blank:]]\+/s/^/#/' /etc/fstab
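### Quick check: swap should report 0 and the fstab entry should now be commented out:
free -h | grep -i swap
grep swap /etc/fstab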

4. install docker-ce:
yum install -y yum-utils    ### provides yum-config-manager
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce

5. install kubernetes:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
### Note: gpgcheck=1 may fail while installing kubeadm; if it works in your environment, keep it set to 1.
yum install -y kubelet kubeadm kubectl
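### Optional: confirm all three packages installed at the same version on every node:
rpm -q kubelet kubeadm kubectl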

6. reboot OS:
reboot

7. start and enable services:
systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
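### Optional check related to step 8: on CentOS 7, docker-ce normally reports
### 'Cgroup Driver: cgroupfs':
docker info 2>/dev/null | grep -i 'cgroup driver'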

8. fix cgroupfs issue:
kadm_conf=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
grep -q 'KUBELET_KUBECONFIG_ARGS=.* --cgroup-driver=cgroupfs"' $kadm_conf || sed -i '/KUBELET_KUBECONFIG_ARGS=/s/"$/ --cgroup-driver=cgroupfs"/' $kadm_conf
systemctl daemon-reload
systemctl restart kubelet
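### Verify the flag was appended and the service came back up:
grep cgroup-driver $kadm_conf
systemctl is-active kubelet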

### Note: run the above steps on all nodes (both master and workers)

9. enable master (Run on node1 only):
kubeadm init --apiserver-advertise-address=192.168.1.1 --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
### Note: copy the 'kubeadm join 192.168.1.1:6443 --token XXXXXXXXXXX' line from the output and save it to a text file
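### If the join line gets lost, a fresh one can be printed on the master at any time:
kubeadm token create --print-join-command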

10. verify the master node:
kubectl get nodes
kubectl get pods --all-namespaces
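### The master stays NotReady until the flannel pods are running; watch them come up with:
kubectl get pods --all-namespaces -w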

11. enable workers (Run on node2 & node3 only):
### paste the command line that was copied from the master:
kubeadm join 192.168.1.1:6443 --token XXXXXXXXXXX...

12. verify nodes (Run on node1):
### you should see something like the following:
[root@node1 ~]# kubectl get nodes
NAME                STATUS   ROLES    AGE    VERSION
node1.example.com   Ready    master   12m    v1.13.1
node2.example.com   Ready    <none>   4m9s   v1.13.1
node3.example.com   Ready    <none>   35m    v1.13.1


* Dashboard:
### Ref:
###     https://github.com/kubernetes/dashboard
###     https://github.com/kubernetes/dashboard/wiki/Creating-sample-user

1. create the dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy
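### Note: kubectl proxy stays in the foreground and listens on 127.0.0.1:8001 only.
### If your browser runs on another machine, one option is an SSH tunnel to the
### master (replace root@node1 with your own login and host):
ssh -L 8001:127.0.0.1:8001 root@node1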

2. create admin user:
cat > dashboard-adminuser.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF
cat > rolebinding.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
kubectl apply -f dashboard-adminuser.yaml
kubectl apply -f rolebinding.yaml

3. view and copy login token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | awk '/^token:/{print $2}'

4. access dashboard:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
### select 'Token' and paste the login token

--- end ---
« Last Edit: 2019-01-21 00:02 by netman »

netman

Re: Building Kubernetes Cluster on CentOS7
« Reply #1 on: 2019-01-29 16:27 »
Switching Cluster Using Context

Ref:
https://stackoverflow.com/questions/42170380/how-to-add-users-to-kubernetes-kubectl
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
https://github.com/kubernetes/dashboard/wiki/Creating-sample-user

Steps:

===== on master =====

1. create service account:
Code: [Select]
cat > kenny_sa.yaml << END
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kenny
  namespace: kube-system
END
cat > kenny_clusterrole.yaml << END
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kenny
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
END
cat > kenny_rolebinding.yaml << END
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kenny
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kenny
subjects:
- kind: ServiceAccount
  name: kenny
  namespace: kube-system
END
kubectl create -f kenny_sa.yaml
kubectl create -f kenny_clusterrole.yaml
kubectl create -f kenny_rolebinding.yaml
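### Optional check: the account, role and binding should now exist:
kubectl -n kube-system get sa kenny
kubectl get clusterrole kenny
kubectl get clusterrolebinding kenny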

2. retrieve and transfer the access environment
Code: [Select]
u_name=$(kubectl -n kube-system get sa kenny -o jsonpath='{.secrets[].name}')
kubectl -n kube-system get secrets $u_name -o jsonpath='{.data.ca\.crt}' | base64 -d > kenny_ca.crt
kubectl -n kube-system get secrets $u_name -o jsonpath='{.data.token}' | base64 -d > kenny_token
name=$(kubectl config get-contexts $(kubectl config current-context) | tail -n 1 | awk '{print $3}')
kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}" > endpoint
scp kenny_token kenny_ca.crt endpoint <console_node>:
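### 'endpoint' should now hold the API server URL, e.g. https://192.168.1.1:6443
### (the same address used by kubeadm join in the first post):
cat endpoint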

======== on console node ========
1. set env
Code: [Select]
endpoint=$(cat endpoint)
user_token=$(cat kenny_token)

2. set up cluster, credentials and context
Code: [Select]
kubectl config set-cluster testlab --server=$endpoint --certificate-authority=kenny_ca.crt --embed-certs=true
kubectl config set-credentials kenny --token=$user_token
kubectl config set-context kenny-testlab --cluster=testlab --user=kenny --namespace=default

3. use context
Code: [Select]
kubectl config use-context kenny-testlab
kubectl config view
kubectl get nodes

Summary:
* on master:
  - create ServiceAccount, ClusterRole, RoleBinding
  - retrieve and transfer env to the console node: ca.crt, token, endpoint
* on console node:
  - set up cluster, credentials, context
  - run use-context