
Cloud-Native High Availability and Disaster Recovery Series (I): Pod Spread Scheduling

2022-06-24 06:35:00 imroc

This article is excerpted from Kubernetes Learning Notes.

Overview

Spreading Pods across different locations avoids service unavailability caused by hardware or software failures, fiber cuts, power outages, or natural disasters, and thus achieves a highly available deployment.

Kubernetes supports two ways to spread Pods:

  • Pod anti-affinity (Pod Anti-Affinity)
  • Pod topology spread constraints (Pod Topology Spread Constraints)

This article walks through usage examples of both approaches and ends with a comparison.

Use podAntiAffinity

Force Pods to be spread across different nodes (strong anti-affinity) to avoid a single point of failure:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: nginx
      containers:
      - name: nginx
        image: nginx
  • labelSelector.matchLabels: replace with the labels actually used by the Pods you want to spread.
  • topologyKey: the key of a node label that represents the node's topology domain. You can use well-known labels, most commonly kubernetes.io/hostname (node level) and topology.kubernetes.io/zone (availability zone / data center level). You can also manually add a custom label to nodes to define your own topology domain, such as rack (rack level), machine (physical machine level), or switch (switch level); see the rack-level sketch after this list.
  • If you do not want a hard requirement, you can use weak (preferred) anti-affinity instead, so that Pods are scheduled to different nodes on a best-effort basis:

        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: nginx
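As an illustration of a custom topology domain, here is a minimal sketch that assumes the nodes have already been labeled with a custom rack label (for example rack=rack-1; the label name and values are assumptions, not built-in labels), so that Pods are forcibly spread across racks:

    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: rack # assumed custom node label, e.g. rack=rack-1
            labelSelector:
              matchLabels:
                app: nginx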

Force Pods to be spread across different availability zones (data centers) to achieve cross-data-center disaster recovery:

Change kubernetes.io/hostname to topology.kubernetes.io/zone; everything else is the same as above.
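For reference, a minimal sketch of the affinity section after this change (same labelSelector as the node-level example above):

    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: topology.kubernetes.io/zone
            labelSelector:
              matchLabels:
                app: nginx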

Use topologySpreadConstraints

Spread Pods as evenly as possible across all nodes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx
      containers:
      - name: nginx
        image: nginx
  • topologyKey: same meaning as the corresponding field in podAntiAffinity.
  • labelSelector: similar to the one in podAntiAffinity; it selects the Pods to be counted and can match on multiple Pod labels.
  • maxSkew: must be an integer greater than zero. It is the maximum tolerated difference in the number of matching Pods between topology domains; a value of 1 means the counts may differ by at most 1 Pod, i.e. no node may hold more than one extra nginx Pod compared to the node with the fewest.
  • whenUnsatisfiable: what to do when the constraint cannot be satisfied. DoNotSchedule means do not schedule (the Pod stays Pending), similar to strong anti-affinity; ScheduleAnyway means schedule it anyway, similar to weak anti-affinity.

Putting the above configuration together: all nginx Pods are strictly and evenly spread across different nodes, so the number of nginx replicas on any two nodes can differ by at most 1. If some node cannot accept more Pods for other reasons (for example, insufficient resources), the remaining nginx replicas stay Pending.

Therefore, strictly spreading across all nodes is usually not what you want. You can add nodeAffinity so that the strict spreading only applies to a subset of nodes that have sufficient resources:

    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: io
                operator: In
                values:
                - high
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx

Alternatively, similar to weak anti-affinity, you can spread Pods across nodes as evenly as possible without forcing it (change DoNotSchedule to ScheduleAnyway):

    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: nginx

If the cluster nodes span multiple availability zones, you can also spread Pods across availability zones as evenly as possible to achieve an even higher level of availability (change topologyKey to topology.kubernetes.io/zone):

    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: nginx

Going further, you can spread Pods as evenly as possible across availability zones and, at the same time, across the nodes within each availability zone:

    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: nginx
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: nginx

Summary

As the examples make clear, topologySpreadConstraints is more powerful than podAntiAffinity and provides finer-grained control over scheduling; you can think of topologySpreadConstraints as an upgraded version of podAntiAffinity. The feature is enabled by default since Kubernetes v1.18, so on v1.18 and later it is recommended to use topologySpreadConstraints to spread Pods and improve service availability.
