Cloud-Native High Availability and Disaster Recovery Series (I): Pod Spread Scheduling
2022-06-24 06:35:00 【imroc】
This article is excerpted from Kubernetes learning notes.
Overview
Spreading Pods across different locations during scheduling avoids service outages caused by hardware or software failures, fiber cuts, power failures, or natural disasters, and thus achieves a highly available deployment.
Kubernetes supports two ways to spread Pods during scheduling:
- Pod anti-affinity (Pod Anti-Affinity)
- Pod topology spread constraints (Pod Topology Spread Constraints)
This article walks through usage examples of both approaches and ends with a comparison.
Use podAntiAffinity
Force Pods to be scheduled onto different nodes (hard anti-affinity) to avoid a single point of failure:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: nginx
      containers:
      - name: nginx
        image: nginx

- labelSelector.matchLabels: replace with the labels actually used by the Pods you want to select.
- topologyKey: the key of a node label that represents the node's topology domain. You can use Well-Known Labels; the most common are kubernetes.io/hostname (node level) and topology.kubernetes.io/zone (availability zone / data center level). You can also define your own topology domains by manually labeling nodes with custom labels, such as rack (rack level), machine (physical machine level), or switch (switch level); a sketch of a rack-level example follows below.
- If you do not want a hard constraint, you can use soft anti-affinity instead, so that Pods are scheduled onto different nodes whenever possible:

podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 100
    podAffinityTerm:
      topologyKey: kubernetes.io/hostname
      labelSelector:
        matchLabels:
          app: nginx
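As a concrete example of a custom topology domain, the minimal sketch below spreads Pods by rack. It assumes the cluster's nodes have already been labeled by hand; the node names node-1/node-2 are placeholders:

# Assumption: nodes were labeled beforehand, e.g.
#   kubectl label nodes node-1 rack=rack-1
#   kubectl label nodes node-2 rack=rack-2
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - topologyKey: rack        # custom topology domain: at most one matching Pod per rack
      labelSelector:
        matchLabels:
          app: nginx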
To forcibly spread Pods across different availability zones (data centers) for cross-zone disaster recovery:
Change kubernetes.io/hostname to topology.kubernetes.io/zone; everything else is the same as above.
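For illustration, a minimal sketch of the affinity section with the topology key swapped to the zone label (assuming the nodes carry the standard topology.kubernetes.io/zone label set by the cloud provider or administrator):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - topologyKey: topology.kubernetes.io/zone   # at most one matching Pod per availability zone
      labelSelector:
        matchLabels:
          app: nginx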
Use topologySpreadConstraints
To spread Pods as evenly as possible across all nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx
      containers:
      - name: nginx
        image: nginx

- topologyKey: same meaning as the corresponding field in podAntiAffinity.
- labelSelector: same as in podAntiAffinity; here it can also select Pods carrying multiple groups of labels (a sketch follows below).
- maxSkew: must be an integer greater than zero; it is the maximum permitted difference in the number of matching Pods between any two topology domains. A value of 1 means the counts may differ by at most 1 Pod.
- whenUnsatisfiable: what to do when the constraint cannot be satisfied. DoNotSchedule means do not schedule (leave the Pod Pending), similar to hard anti-affinity; ScheduleAnyway means schedule anyway, similar to soft anti-affinity.
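As a sketch of selecting Pods across more than one label group, labelSelector can use matchExpressions instead of matchLabels; the nginx-canary value below is a hypothetical second label value, not part of the example above:

topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchExpressions:          # spreads the Pods of both apps as a single group
    - key: app
      operator: In
      values:
      - nginx
      - nginx-canary           # hypothetical second app label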
Taken together, the configuration in the example above means: spread all nginx Pods strictly and evenly across different nodes, so that the number of nginx replicas on any two nodes differs by at most 1; if some node cannot accept more Pods for other reasons (for example, insufficient resources), the remaining nginx replicas stay Pending.
Strictly spreading across all nodes is therefore usually not what you want. You can add nodeAffinity so that the strict spreading applies only to a subset of nodes with sufficient resources:
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: io            # custom node label; only nodes labeled io=high are eligible
            operator: In
            values:
            - high
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: nginx

Or, similar to soft anti-affinity, spread Pods across nodes as evenly as possible without forcing it (change DoNotSchedule to ScheduleAnyway):
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: nginx

If the cluster's nodes span multiple availability zones, you can also spread Pods as evenly as possible across availability zones to achieve an even higher level of availability (change topologyKey to topology.kubernetes.io/zone):
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: nginx

Going further, you can spread Pods as evenly as possible across availability zones and, within each zone, also spread them across nodes as evenly as possible:
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: nginx
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: nginx

Summary
As the examples make clear, topologySpreadConstraints is more powerful than podAntiAffinity and provides finer-grained control over scheduling; you can think of topologySpreadConstraints as an upgraded version of podAntiAffinity. The topologySpreadConstraints feature is enabled by default since Kubernetes v1.18, so on v1.18 and later it is recommended to use topologySpreadConstraints to spread Pods and improve service availability.