Deep Understanding of Pods
2022-07-25 04:35:00 【zakariyaa33】
ConfigMap
Typical uses of a ConfigMap in containers are as follows:
(1) Generating environment variables inside the container.
(2) Setting startup parameters for the container's start command (these must first be set as environment variables).
(3) Mounting it as a file or directory inside the container via a Volume.
A ConfigMap is stored in the Kubernetes system as one or more key:value pairs. It can represent the value of a single variable (for example apploglevel=info) or the contents of an entire configuration file (for example server.xml=<?xml…>…).
A ConfigMap must be created before the Pod that references it; as a consequence, static Pods cannot use ConfigMaps.
Creation methods
A ConfigMap can be created directly with kubectl create configmap, specifying its content with --from-file or --from-literal; multiple such parameters can be given in a single command line.
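As a minimal sketch of the --from-literal form (the ConfigMap name cm-appvars and its keys are assumptions chosen to match the examples later in this section):
# kubectl create configmap cm-appvars --from-literal=apploglevel=info --from-literal=appdatadir=/var/data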
1. For example, if the current directory contains the configuration file server.xml, a ConfigMap can be created from it:
# kubectl create configmap cm-server.xml --from-file=server.xml
This creates a ConfigMap named cm-server.xml.
2. Suppose the configfiles directory contains two configuration files, server.xml and logging.properties. A ConfigMap can be created from the whole directory:
# kubectl create configmap cm-appconf --from-file=configfiles
3. Create from a YAML file.
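As a sketch of the YAML route (the keys and values here are assumptions chosen to match the cm-appvars examples below), save the following as cm-appvars.yaml and create it with kubectl create -f cm-appvars.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-appvars
data:
  apploglevel: info
  appdatadir: /var/data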
Usage
1. As environment variables
When writing the Pod's YAML, set envFrom under the containers entry in spec:
kind: Pod
metadata:
  ************************
  ************************
spec:
  containers:
  - name: ****************
    ************************
    ************************
    envFrom:
    - configMapRef:
        name: cm-appvars
************************
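For reference, a complete minimal version of such a Pod might look like this (the busybox image and all names are assumptions for illustration; the container just prints its environment and exits):
apiVersion: v1
kind: Pod
metadata:
  name: cm-test-pod
spec:
  containers:
  - name: cm-test
    image: busybox
    command: ["sh", "-c", "env"]    # print all environment variables, including those injected from the ConfigMap
    envFrom:
    - configMapRef:
        name: cm-appvars            # every key in cm-appvars becomes an environment variable
  restartPolicy: Never
After the Pod completes, kubectl logs cm-test-pod should list apploglevel=info and appdatadir=/var/data among the variables.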
2. Mounted as a volume
When writing the Pod's YAML, fill in the target volume name and mount directory under volumeMounts in the container spec, and under spec.volumes fill in the volume name and set the configMap parameters:
kind: Pod
spec:
  containers:
  - name: cm-test-app
    ************************
    ************************
    volumeMounts:
    - name: serverxml
      mountPath: /configfiles
  volumes:
  - name: serverxml
    configMap:
      name: cm-appconfigfiles
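With this mount, each key in cm-appconfigfiles appears as a file under /configfiles (for example /configfiles/server.xml and /configfiles/logging.properties). Assuming for illustration that the Pod itself is also named cm-test-app, the result can be checked with:
# kubectl exec cm-test-app -- ls /configfiles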
Pod Scheduling
In most cases, we want the Pod replicas created by a Deployment to be scheduled to whichever cluster nodes are available, without caring which node each replica lands on.
In real production environments, however, there is a genuine need to run all replicas of a certain kind of Pod on one or more designated nodes. For example, we may want a MySQL database scheduled onto target nodes that have SSD disks. This is where the NodeSelector property of the Pod template comes into play. The targeted scheduling of MySQL described above can be implemented in the following two steps.
(1) Label all Nodes that have SSD disks with the custom label disk=ssd.
(2) In the Pod template, set NodeSelector to "disk: ssd".
This way, when scheduling Pod replicas, Kubernetes first filters out the suitable target nodes by their Node labels and then picks the best one to schedule to.
This logic looks simple and complete, but in a real production environment you may face the following awkward problems.
(1) What if the Label selected by NodeSelector does not exist or is not satisfied, for example because the target nodes are down or out of resources at that moment?
(2) What if you want to select among several kinds of suitable target nodes, such as nodes with SSD disks or nodes with ultra-high-speed hard disks?
Kubernetes introduced NodeAffinity (node affinity settings) to address these needs.
Specifying a Node
When a Pod needs to be scheduled onto particular Nodes, you can first label those Nodes and then match them with NodeSelector.
1. Label the Node:
kubectl label nodes <node-name> <label-name>=<label-value>
kubectl label nodes k8s-node-1 zone=north
This labels the Node named k8s-node-1 with zone=north.
Then, when writing the Deployment, select it with nodeSelector, as in the sketch below.
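A sketch of such a Deployment (the redis name and image are assumptions for illustration; only the nodeSelector lines are the point here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      nodeSelector:
        zone: north        # only Nodes labeled zone=north are candidates
      containers:
      - name: redis
        image: redis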
NodeAffinity
NodeAffinity is a node-affinity scheduling policy, introduced as a replacement for NodeSelector. Two ways of expressing node affinity are currently available.
◎ RequiredDuringSchedulingIgnoredDuringExecution: the specified rules must be met for the Pod to be scheduled onto a Node (similar in function to nodeSelector, but with a different syntax); this is a hard requirement.
◎ PreferredDuringSchedulingIgnoredDuringExecution: the specified rules are met on a best-effort basis; the scheduler tries to schedule the Pod onto such a Node but does not insist on it, so this is a soft requirement. Multiple preference rules can also be given weight values to define an order of precedence.
IgnoredDuringExecution means: if a node's labels change while a Pod is running on it, so that the node no longer satisfies the Pod's node affinity requirements, the system ignores the Label change on the Node and the Pod continues to run there.
The following example sets these NodeAffinity scheduling rules:
◎ requiredDuringSchedulingIgnoredDuringExecution: run only on amd64 nodes (beta.kubernetes.io/arch In amd64).
◎ preferredDuringSchedulingIgnoredDuringExecution: prefer to run on nodes whose disk type is ssd (disk-type In ssd). The code is as follows:
*******************
*******************
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: beta.kubernetes.io/arch
            operator: In
            values:
            - amd64
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disk-type
            operator: In
            values:
            - ssd
*******************
*******************
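A complete, runnable version of this Pod might look as follows (the metadata and the nginx image are assumptions; the affinity section matches the rules above):
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    # hard requirement: amd64 Nodes only
        nodeSelectorTerms:
        - matchExpressions:
          - key: beta.kubernetes.io/arch
            operator: In
            values:
            - amd64
      preferredDuringSchedulingIgnoredDuringExecution:   # soft preference: SSD Nodes first
      - weight: 1
        preference:
          matchExpressions:
          - key: disk-type
            operator: In
            values:
            - ssd
  containers:
  - name: with-node-affinity
    image: nginx
Besides In, the operator field also supports NotIn, Exists, DoesNotExist, Gt, and Lt.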
Priority scheduling
Pods can be given different priorities; during preemption, priority decides which one gets to run. For example:
A low-priority Pod A is running on Node A (which belongs to rack R). A high-priority Pod B is waiting to be scheduled, and its target node is Node B, which belongs to the same rack R. One (or both) of them defines an anti-affinity rule forbidding them to run on the same rack. In this case the Scheduler has no choice but to sacrifice the pawn to save the king: it evicts the low-priority Pod A to satisfy the scheduling requirement of the high-priority Pod B.
An example of Pod priority scheduling follows.
First, the cluster administrator creates a PriorityClass (a PriorityClass does not belong to any namespace):
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
The YAML above defines a priority class named high-priority with a priority value of 1000000. The larger the number, the higher the priority; numbers above one billion are reserved by the system for assignment to system components. Any Pod can then reference this priority class:
****************
kind: Pod
****************
****************
spec:
  containers:
  - name: nginx
    image: nginx
    ****************
  priorityClassName: high-priority
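A complete minimal version (metadata and image assumed for illustration):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  priorityClassName: high-priority    # references the PriorityClass created above
Had globalDefault been set to true, the class would also apply to Pods that do not set priorityClassName at all.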
Pod Upgrade and Rollback
Deployment upgrade
Suppose we have a Deployment named nginx-deployment (defined in nginx-deployment.yaml) and we need to upgrade its nginx image. This can be done with:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
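The progress of the rollout can then be watched with:
kubectl rollout status deployment/nginx-deployment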
Deployment rollback
To roll back to a previous version of the Deployment, first query the Deployment's rollout history:
kubectl rollout history deployment/nginx-deployment
Output:
REVISION  CHANGE_CAUSE
1         kubectl create --filename=nginx-deployment.yaml --record=true
2         kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
3         kubectl set image deployment/nginx-deployment nginx=nginx:1.91
Use undo to roll back to the previous version:
kubectl rollout undo deployment/nginx-deployment
You can also specify the revision to roll back to:
kubectl rollout undo deployment/nginx-deployment --to-revision=1
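Before rolling back, the details of a particular revision can be inspected with:
kubectl rollout history deployment/nginx-deployment --revision=3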
Pod Scaling
Manual
The scale command changes the number of Pod replicas:
kubectl scale deployment nginx-deployment --replicas 5
# scale the replica count to 5
Setting a number smaller than the current count scales the Deployment down.
Automatic
The Horizontal Pod Autoscaler (HPA) controller automatically scales the number of Pods based on CPU usage.
The HPA controller works on the detection period defined by the --horizontal-pod-autoscaler-sync-period startup parameter of the Master's kube-controller-manager service (default 15s): it periodically monitors the resource performance metrics of the target Pods, compares them against the scaling conditions in the HPA resource object, and adjusts the number of Pod replicas when the conditions are met.
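As a sketch, an HPA that keeps the average CPU usage of nginx-deployment around 80%, scaling between 2 and 10 replicas (the HPA name is illustrative):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
The equivalent can be created imperatively with kubectl autoscale deployment nginx-deployment --min=2 --max=10 --cpu-percent=80.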