Kubernetes theoretical basis

2022-06-28 07:37:00 Courageous steak

1 Introduction

For a highly available cluster, the number of replicas is best kept at an odd number >= 3.
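A brief aside on why an odd count is preferred (standard quorum arithmetic, not something specific to this article): a cluster of n members must keep a majority alive to serve writes, so adding an even member adds cost without adding fault tolerance.

```latex
% quorum(n): smallest majority of an n-member cluster
% f_max(n): member failures the cluster can survive while keeping a quorum
\[
\mathrm{quorum}(n) = \left\lfloor \frac{n}{2} \right\rfloor + 1,
\qquad
f_{\max}(n) = n - \mathrm{quorum}(n) = \left\lceil \frac{n}{2} \right\rceil - 1
\]
% n = 3: quorum 2, tolerates 1 failure
% n = 4: quorum 3, still tolerates only 1 failure
% hence odd sizes (3, 5, ...) give the best fault tolerance per member
```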

2 Components

(Figure: K8S architecture diagram)

2.1 Core components

2.1.1 API Server

The unified entry point for access to all services
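Every client, from kubectl to the controllers themselves, goes through this one endpoint. A minimal sketch with the official Python client; the kubeconfig location and the "default" namespace are assumptions:

```python
# pip install kubernetes
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes ~/.kube/config exists).
config.load_kube_config()

# Every call below is an HTTP request to the API Server,
# the cluster's single entry point.
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.pod_ip)
```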

2.1.2 ControllerManager

Maintains the expected number of replicas

2.1.3 Scheduler

Responsible for accepting tasks and selecting suitable nodes to assign them to

2.1.4 etcd

A key-value database that stores all the important information of the K8S cluster (persisted)

  • ETCD
    etcd is officially positioned as a reliable distributed key-value storage service. It can store critical data for the whole distributed cluster and assist the distributed cluster in operating normally.

(Figure: etcd architecture diagram)
WAL: write-ahead log
Store: persists data to the local disk
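A minimal sketch of etcd's key-value model using the third-party `etcd3` Python client. The endpoint address and the demo key are assumptions, and this is not how Kubernetes itself talks to etcd, just an illustration of put/get semantics:

```python
# pip install etcd3
import etcd3

# Connect to a local etcd member (address and port are assumptions).
etcd = etcd3.client(host="127.0.0.1", port=2379)

# Kubernetes persists its own objects under keys like /registry/...;
# here we only demonstrate the key-value put/get model.
etcd.put("/demo/app/config", "replicas=3")
value, _meta = etcd.get("/demo/app/config")
print(value.decode())  # -> replicas=3
```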

2.1.5 Kubelet

Interacts directly with the container engine to manage the container life cycle

2.1.6 Kube-proxy

Responsible for writing rules into IPTABLES or IPVS to implement service mapping and access

2.2 Other plug-ins

2.2.1 CoreDNS

Provides DNS resolution for SVCs (Services) in the cluster, creating domain-name-to-IP mappings
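Inside the cluster, each Service gets a DNS name of the form `<service>.<namespace>.svc.cluster.local`. A sketch of what a Pod could run to resolve one; "my-svc" and "default" are hypothetical names:

```python
import socket

# CoreDNS resolves Service names of the form
# <service>.<namespace>.svc.cluster.local to the Service's ClusterIP.
ip = socket.gethostbyname("my-svc.default.svc.cluster.local")
print(ip)  # prints the ClusterIP when run inside the cluster
```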

2.2.2 dashboard

Provides a B/S (browser/server) access interface for the K8S cluster

2.2.3 Ingress Controller

Officially, Kubernetes can only implement layer-4 proxying; Ingress can implement layer-7 proxying
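A minimal layer-7 routing sketch via the official Python client. The host name "example.local" and Service name "my-svc" are made-up, and an Ingress controller (e.g. ingress-nginx) must be deployed for the rule to take effect:

```python
from kubernetes import client, config

config.load_kube_config()

# Layer-7 routing: match on HTTP host and path, forward to a Service.
ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "web-ingress"},
    "spec": {
        "rules": [{
            "host": "example.local",
            "http": {
                "paths": [{
                    "path": "/",
                    "pathType": "Prefix",
                    "backend": {"service": {"name": "my-svc",
                                            "port": {"number": 80}}},
                }]
            },
        }]
    },
}
client.NetworkingV1Api().create_namespaced_ingress(
    namespace="default", body=ingress
)
```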

2.2.4 Federation

Provides a cluster center capable of managing multiple K8S clusters in a unified way

2.2.5 prometheus

Provides monitoring capability for the K8S cluster

2.2.6 ELK

Provides a unified log analysis and access platform for the K8S cluster

3 Pod

3.1 Pod Concept

  • Autonomous Pod
  • Controller-managed Pod

Note: the classification above is not an official one

3.1.1 Pod controller types

HPA
Horizontal Pod Autoscaling applies only to Deployments and ReplicaSets. In v1 it scales up and down based only on the Pods' CPU utilization; in the v1alpha version it also supports scaling based on memory and user-defined metrics.
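A minimal autoscaling/v1 HPA sketch via the official Python client; the target Deployment name "my-app" and the thresholds are made-up values:

```python
from kubernetes import client, config

config.load_kube_config()

# Scale the (hypothetical) Deployment "my-app" between 1 and 5 replicas,
# targeting 80% average CPU utilization (autoscaling/v1 is CPU-only).
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="my-app-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="my-app"
        ),
        min_replicas=1,
        max_replicas=5,
        target_cpu_utilization_percentage=80,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```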

StatefulSet
Solves the problem of stateful services (the corresponding Deployments and ReplicaSets are designed for stateless services). Its scenarios include (a manifest sketch follows this list):

  • Stable persistent storage: a Pod can still access the same persistent data after rescheduling, implemented with PVCs.
  • Stable network identity: a Pod's PodName and HostName remain unchanged after rescheduling, implemented with a Headless Service (a Service without a Cluster IP).
  • Ordered deployment and ordered scaling: Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; all preceding Pods must be Running and Ready before the next one runs), implemented with init containers.
  • Ordered shrinking and ordered deletion (from N-1 to 0).
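A minimal StatefulSet sketch submitted as a plain dict via the Python client. The names, image, and storage size are made-up, and the Headless Service "web-hs" is assumed to exist:

```python
from kubernetes import client, config

config.load_kube_config()

statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "web"},
    "spec": {
        "serviceName": "web-hs",  # Headless Service: stable per-Pod DNS names
        "replicas": 3,            # created in order: web-0, web-1, web-2
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{
                "name": "nginx",
                "image": "nginx:1.25",
                "volumeMounts": [{"name": "data",
                                  "mountPath": "/usr/share/nginx/html"}],
            }]},
        },
        # One PVC per Pod, re-attached to the same Pod after rescheduling.
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "1Gi"}},
            },
        }],
    },
}
client.AppsV1Api().create_namespaced_stateful_set(
    namespace="default", body=statefulset
)
```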

DaemonSet
Ensures that all (or some) Nodes run one copy of a Pod. When a Node joins the cluster, a Pod is added on it as well; when a Node is removed from the cluster, that Pod is garbage-collected. Deleting a DaemonSet deletes all the Pods it created.

Some typical uses of DaemonSet (a manifest sketch follows this list):

  • Run a cluster storage daemon on every Node, such as glusterd or ceph
  • Run a log collection daemon on every Node, such as fluentd or logstash
  • Run a monitoring daemon on every Node, such as Prometheus Node Exporter
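A minimal DaemonSet sketch via the Python client. The name and image are made-up, and a real log collector would additionally need host-path volumes to read node logs:

```python
from kubernetes import client, config

config.load_kube_config()

# One Pod per Node; newly joined Nodes automatically get a copy.
daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "log-agent"},
    "spec": {
        "selector": {"matchLabels": {"app": "log-agent"}},
        "template": {
            "metadata": {"labels": {"app": "log-agent"}},
            "spec": {"containers": [{"name": "fluentd",
                                     "image": "fluentd:v1.16"}]},
        },
    },
}
client.AppsV1Api().create_namespaced_daemon_set(
    namespace="default", body=daemonset
)
```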

Job
Responsible for batch tasks, i.e., tasks that execute only once; it guarantees that one or more Pods of the batch task end successfully.
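A minimal Job sketch via the Python client, using the classic compute-pi example; the name and command are illustrative:

```python
from kubernetes import client, config

config.load_kube_config()

# Runs the container to completion; the Job succeeds once a Pod exits 0.
job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "pi"},
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "pi",
                    "image": "perl:5.34",
                    "command": ["perl", "-Mbignum=bpi", "-wle",
                                "print bpi(2000)"],
                }],
                "restartPolicy": "Never",  # Jobs require Never or OnFailure
            }
        },
        "backoffLimit": 4,  # retry failed Pods up to 4 times
    },
}
client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```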

Cron Job
Manages time-based Jobs (a manifest sketch follows this list), i.e.:

  • Run only once at a given point in time
  • Run periodically at a given point in time
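A minimal CronJob sketch via the Python client, assuming a client and cluster recent enough that CronJob lives in batch/v1 (older versions used batch/v1beta1); the schedule and command are made-up:

```python
from kubernetes import client, config

config.load_kube_config()

# Creates a new Job from the template every 5 minutes.
cronjob = {
    "apiVersion": "batch/v1",
    "kind": "CronJob",
    "metadata": {"name": "hello"},
    "spec": {
        "schedule": "*/5 * * * *",
        "jobTemplate": {
            "spec": {
                "template": {
                    "spec": {
                        "containers": [{
                            "name": "hello",
                            "image": "busybox:1.36",
                            "command": ["sh", "-c", "date; echo hello"],
                        }],
                        "restartPolicy": "OnFailure",
                    }
                }
            }
        },
    },
}
client.BatchV1Api().create_namespaced_cron_job(
    namespace="default", body=cronjob
)
```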

3.1.2 Pod Service discovery

(Figure: Pod service discovery)

4 Network communication mode

The Kubernetes network model assumes that all Pods live in a flat network space that can be reached directly. On GCE (Google Compute Engine) this is a ready-made network model, and Kubernetes assumes the network already exists. When building a Kubernetes cluster in a private cloud, however, you cannot assume such a network exists; we have to implement this network assumption ourselves: first make the Docker containers on different nodes able to reach each other, and only then run Kubernetes.

  • Between multiple containers in the same Pod: the lo (loopback) interface
  • Between different Pods: an Overlay Network
  • Between a Pod and a Service: the iptables rules on each node

Flannel is a network planning service designed by the CoreOS team for Kubernetes. Simply put, its function is to give the Docker containers created on different node hosts in the cluster virtual IP addresses that are unique across the whole cluster. It also builds an overlay network between these IP addresses, through which packets are delivered to the target container unchanged.

What etcd provides for Flannel:

  • Stores and manages the IP address segment resources that Flannel can allocate
  • Monitors the actual address of each Pod in etcd, and builds and maintains a Pod-to-node routing table in memory
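With the etcd backend, Flannel reads its cluster-wide network configuration from a well-known key, conventionally /coreos.com/network/config. A small sketch of the stored data; the subnet and backend values are typical illustrative settings, not taken from this article:

```python
import json

# Flannel's etcd backend convention: cluster-wide config under this key.
FLANNEL_CONFIG_KEY = "/coreos.com/network/config"
flannel_config = {
    "Network": "10.244.0.0/16",    # cluster-wide Pod address space
    "Backend": {"Type": "vxlan"},  # overlay encapsulation method
}
print(FLANNEL_CONFIG_KEY, "->", json.dumps(flannel_config))
# Flannel carves per-node subnets (e.g. 10.244.1.0/24) out of this range
# and records each node's lease under /coreos.com/network/subnets/.
```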


Learning resource:
https://www.bilibili.com/video/BV1w4411y7Go
