
Building a Kubernetes cluster across multiple ECS instances

2022-06-25 23:12:00 Illusory private school


Environment

Two or more Tencent Cloud servers (I used two), both running CentOS 7.6:

master node: 4C8G, public IP 124.222.61.xxx

node1 node: 4C4G, public IP 101.43.182.xxx

Modify the hosts information:

On both the master and the node, add the node entries to the /etc/hosts file:

$ vim /etc/hosts
124.222.61.xxx master
101.43.182.xxx node1

Here master and node1 are hostnames. Try not to use the default hostname; the command to change it is hostnamectl set-hostname master.
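
For example, run the matching command on each machine before continuing:

$ hostnamectl set-hostname master   # on the 124.222.61.xxx machine
$ hostnamectl set-hostname node1    # on the 101.43.182.xxx machine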

Disable the firewall:

$ systemctl stop firewalld
$ systemctl disable firewalld

Disable SELinux:

$ setenforce 0
$ vim /etc/selinux/config
SELINUX=disabled

Load the br_netfilter module:

$ modprobe br_netfilter

Create the /etc/sysctl.d/k8s.conf file:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the following command to make the changes take effect:

$ sysctl -p /etc/sysctl.d/k8s.conf

Configure IPVS support by loading the required kernel modules:

$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install ipset on each node:

$ yum install ipset

Install the IPVS management tool ipvsadm:

$ yum install ipvsadm
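
Optionally, confirm that ipvsadm works; the rule table will stay empty until kube-proxy starts creating IPVS rules later:

$ ipvsadm -Ln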

Synchronize the server time:

$ yum install chrony -y
$ systemctl enable chronyd
$ systemctl start chronyd

Disable the swap partition:

$ swapoff -a
$ vim /etc/sysctl.d/k8s.conf
vm.swappiness=0   # add this line
$ sysctl -p /etc/sysctl.d/k8s.conf
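
Note that swapoff -a only lasts until the next reboot. To keep swap disabled permanently, also comment out the swap entry in /etc/fstab; the device path below is only an illustration, edit whatever line your own fstab contains:

$ vim /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0    <- comment out this line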

Install Docker:

$ yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
$ yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo   # Alibaba Cloud mirror
$ yum install docker-ce-18.09.9

Configure the Docker registry mirror (Alibaba Cloud) and the systemd cgroup driver:

$ mkdir -p /etc/docker
$ vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors" : [
    "https://uvtcantv.mirror.aliyuncs.com"
  ]
}

Start Docker and enable it on boot:

$ systemctl start docker
$ systemctl enable docker
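
You can check that the daemon.json settings took effect: docker info should report "Cgroup Driver: systemd" and list the registry mirror.

$ docker info | grep -iE 'cgroup driver|registry'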

Install kubeadm: first add the Kubernetes yum repository (Alibaba Cloud mirror):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
 http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Then install kubeadm, kubelet, and kubectl:

$ yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes

Enable kubelet and set it to start on boot:

$ systemctl enable --now kubelet

All of the steps above must be performed on every node.

Cluster initialization

Generate the default kubeadm initialization file on the master node:

$ kubeadm config print init-defaults > kubeadm.yaml

Edit kubeadm.yaml: change imageRepository to the Alibaba Cloud registry, set the kube-proxy mode to ipvs, and set networking.podSubnet to 10.244.0.0/16.

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 124.222.61.xxx  # apiserver master node IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master  # defaults to the current master node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # switch to the Alibaba Cloud image registry
kind: ClusterConfiguration
kubernetesVersion: v1.16.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod CIDR; the flannel plugin requires this subnet
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # kube-proxy mode

Then initialize the cluster with the configuration file above:

$ kubeadm init --config kubeadm.yaml

There is a pitfall here: the initialization gets stuck at the etcd step, because etcd tries to bind its ports to the public IP. On a cloud ECS instance the public IP is not assigned to a local network card (it is a gateway address provided for external access), so the bind keeps failing and retrying, and the process hangs for a long time at [kubelet-check] Initial timeout of 40s passed.

The workaround: while the init is stuck, open another terminal on the same server and edit the etcd.yaml generated by the initialization:

vim /etc/kubernetes/manifests/etcd.yaml

Change it so that etcd no longer listens on the public IP.
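
The exact flags depend on your own generated file; the key point is that the --listen-* URLs must use 0.0.0.0 (or the private NIC address) instead of the public IP, while the --advertise-* URLs can keep the public IP. A rough sketch of the relevant lines:

    - --advertise-client-urls=https://124.222.61.xxx:2379
    - --listen-client-urls=https://0.0.0.0:2379   # originally included the public IP, which cannot be bound
    - --listen-peer-urls=https://0.0.0.0:2380     # likewise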

Then just wait patiently for three to four minutes.

After initialization succeeds, a kubeadm join command is printed on the terminal; this is the command that worker nodes run to join the cluster.
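
It has the general form shown below (placeholders only; the real token and hash come from your own output, as in the actual command used later):

kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>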

Copy the kubeconfig file:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Add a node

Copy the master node's $HOME/.kube/config file to the same path ($HOME/.kube/config) on the node.
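
For example, assuming root SSH access from node1 to the master, you could run on node1:

$ mkdir -p $HOME/.kube
$ scp root@master:/root/.kube/config $HOME/.kube/config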

Then run the join command generated by the master node's initialization. If you have lost it, run kubeadm token create --print-join-command to regenerate it.

kubeadm join 124.222.61.161:6443 --token 1l2un1.or6f04f1rewyf0xq     --discovery-token-ca-cert-hash sha256:1534171b93c693e6c0d7b2ed6c11bb4e2604be6d2af69a5f464ce74950ed4d9d

After it succeeds, run the kubectl get nodes command:

$ kubectl get nodes

The STATUS will show NotReady, because the network plugin has not been installed yet.

Install the flannel network plugin:

$ wget  https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
$ vi kube-flannel.yml
......
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth0  # if the machine has multiple network cards, specify the internal NIC here
......
$ kubectl apply -f kube-flannel.yml

Wait a while and check that the Pods are in the Running state:

$ kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-58cc8c89f4-6nn74              1/1     Running   0          18h
coredns-58cc8c89f4-v96jb              1/1     Running   0          18h
etcd-ydzs-master                      1/1     Running   0          18h
kube-apiserver-ydzs-master            1/1     Running   2          18h
kube-controller-manager-ydzs-master   1/1     Running   0          18h
kube-flannel-ds-amd64-674zs           1/1     Running   0          18h
kube-flannel-ds-amd64-zbv7l           1/1     Running   0          18h
kube-proxy-b7c9c                      1/1     Running   0          18h
kube-proxy-bvsrr                      1/1     Running   0          18h
kube-scheduler-ydzs-master            1/1     Running   0          18h

Check the nodes again; everything should now be normal:

$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   18h   v1.16.2
node1    Ready    <none>   18h   v1.16.2

Configure the Dashboard

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml
$ vi recommended.yaml
# change the Service to the NodePort type
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort  # add this line to expose the Service as a NodePort
......
$ kubectl apply -f recommended.yaml


The Dashboard is installed in the kubernetes-dashboard namespace by default:

$ kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-6b86b44f87-xsqft   1/1     Running   0          16h
$ kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.100.126.111   <none>        8000/TCP        17h
kubernetes-dashboard        NodePort    10.108.217.144   <none>        443:31317/TCP   17h

Then access https://124.222.61.161:31317. You will find that the access fails because the certificate has expired, so let's generate a certificate ourselves:

# create a working directory:
mkdir key && cd key

# generate a private key
openssl genrsa -out dashboard.key 2048 

# I used my own node IP here because I access the dashboard via NodePort; if you access it via the apiserver, use your master node's IP instead
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=124.222.61.161'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt 

# Delete the original certificate secret
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard

# Create a new certificate secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard

# check the pods
kubectl get pod -n kubernetes-dashboard

# restart the pod (deleting it lets the Deployment recreate it with the new secret)
kubectl delete pod kubernetes-dashboard-7b5bf5d559-gn4ls  -n kubernetes-dashboard

After this, the browser will still warn that the connection is not secure; just proceed anyway.

Firefox is used here; the page cannot be opened in Chrome.

Create a user to log in to the Dashboard:

# create the admin.yaml file
$ vim admin.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard

# apply it directly
$ kubectl apply -f admin.yaml
$ kubectl get secret -n kubernetes-dashboard|grep admin-token
admin-token-jv2dq                  kubernetes.io/service-account-token   3      16h
$ kubectl get secret admin-token-jv2dq -o jsonpath={.data.token} -n kubernetes-dashboard | base64 -d
# this prints a long decoded token string

Then log in to the Dashboard with the token string above.


Copyright notice
This article was written by [Illusory private school]. Please include a link to the original when reposting:
https://yzsam.com/2022/176/202206251942006304.html