
How to configure networkpolicy for nodeport in kubernetes

2022-06-24 08:27:00 Chenshaowen

1. Background

As shown in the figure above, the business side needs to isolate the services in a namespace: access from workloads in the bar namespace must be prohibited, while users must still be able to reach the services through a Load Balancer (LB) via NodePort. It seems easy to write a network policy for this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: foo
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.2.3.4/32
    - namespaceSelector:
        matchExpressions:
        - key: region
          operator: NotIn
          values:
          - bar

However, traffic from the LB is then completely blocked, which falls short of expectations. The answer commonly found in the technical community is that Kubernetes NetworkPolicy mainly targets access inside the cluster: once external traffic goes through SNAT, its source IP changes and no longer matches the policy.

Different network plugins, and different modes of the same plugin, require different configurations. This article only offers one line of thinking, using the common Calico IPIP mode as an example of an access policy for NodePort traffic.

2. Prerequisite knowledge

2.1 NetworkPolicy in Kubernetes

In the article "Kubernetes Network Isolation (covering more than ten usage scenarios)", I described Kubernetes NetworkPolicy and gave many examples.

NetworkPolicy is the Kubernetes object for network isolation, used to describe network isolation policies; the concrete implementation depends on the network plugin. Currently, network plugins such as Calico, Cilium, and Weave Net support network isolation.

2.2 Calico's working modes

  • BGP mode

In BGP mode, the BGP clients in the cluster are interconnected in pairs, forming a full mesh that synchronizes routing information.

  • Route Reflector mode

In BGP mode, the number of client connections reaches N * (N - 1), where N is the number of nodes (for example, 100 nodes already require 9,900 BGP sessions). This limits the cluster size; the community recommends no more than 100 nodes.

In Route Reflector mode, the BGP clients no longer synchronize routes with each other in pairs; instead, routing information is synchronized to a few designated Route Reflectors. Each BGP client only needs to connect to the Route Reflectors, so the number of connections grows linearly with the number of nodes.
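For reference, the snippet below sketches how such a topology is typically declared through Calico's v3 API: the node-to-node full mesh is switched off cluster-wide, and every node peers only with the nodes labeled as reflectors. The label key, AS number, and resource names here are assumptions, not values from this cluster; the reflector nodes themselves additionally need routeReflectorClusterID set on their Calico Node resource.

# Applied with calicoctl apply -f; disables the node-to-node full mesh
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: false
  asNumber: 64512
---
# Peers every node with the nodes labeled route-reflector='true' (assumed label)
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-with-route-reflectors
spec:
  nodeSelector: all()
  peerSelector: route-reflector == 'true'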

  • IPIP mode

Unlike BGP mode, IPIP mode connects the nodes through tunnels built on the tunl0 device. The figure below describes the traffic between Pods in IPIP mode.
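In Calico, this mode is controlled on the IPPool resource. The sketch below shows what such a pool might look like; the pool name and CIDR are assumptions (chosen to cover the 10.233.x.x Pod addresses seen later in this article), not values read from this cluster.

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.233.64.0/18
  ipipMode: Always      # always encapsulate Pod traffic between nodes via tunl0
  natOutgoing: true     # SNAT Pod traffic leaving the pool toward non-pool destinations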

3. Why the network policy doesn't work

In a previous article, "How to Get the Client's Real IP in Kubernetes", I described the impact of externalTrafficPolicy on service traffic.

In Cluster mode, if you access node-2:nodeport, the traffic is forwarded to node-1, the node where the service's Pod actually runs.

In Local mode, if you access node-2:nodeport, the traffic is not forwarded, and the request cannot be served.

Usually we keep the default Cluster mode, and Cluster mode applies SNAT when forwarding traffic, i.e. it rewrites the source address. As a result the request no longer matches the network policy, which is easily mistaken for the policy not taking effect.

Here are two solutions:

  1. Add the post-SNAT source address to the access whitelist as well.
  2. Use Local mode. Because the LB performs health checks, it only forwards traffic to nodes that host the service's Pod, which preserves the source address (a patch sketch follows the list).
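As a preview of solution 2, switching a service to Local mode is a one-line patch; a sketch using the test service from section 4:

kubectl -n tekton-pipelines patch svc tekton-dashboard \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'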

4. Configuring NetworkPolicy for NodePort

4.1 Test environment

  • Kubernetes version

v1.19.8

  • kube-proxy forwarding mode

IPVS
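If you need to confirm the forwarding mode on a node, kube-proxy reports it over its local metrics endpoint; a sketch, assuming the default metrics bind address 127.0.0.1:10249:

curl http://localhost:10249/proxyMode
# expected output: ipvs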

  • Node information

kubectl get node -o wide
NAME    STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
node1   Ready    master,worker   34d   v1.19.8   10.102.123.117   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.6
node2   Ready    worker          34d   v1.19.8   10.102.123.104   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.6
node3   Ready    worker          34d   v1.19.8   10.102.123.143   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.6

  • Workload under test

kubectl -n tekton-pipelines get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
tekton-dashboard-75c65d785b-xbgk6   1/1     Running   0          14h   10.233.96.32   node2   <none>           <none>

The workload runs on node2.

  • Service under test

kubectl -n tekton-pipelines get svc
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
tekton-dashboard   NodePort   10.233.5.155   <none>        9097:31602/TCP   10m

4.2 How NodePort traffic is forwarded to the Pod

Here we mainly consider two cases.

  1. Accessing node1, a node where the workload's Pod does not run
  • Service forwarding rules
ipvsadm -L
TCP  node1:31602 rr
  -> 10.233.96.32:9097            Masq    1      0          0
TCP  node1:31602 rr
  -> 10.233.96.32:9097            Masq    1      0          0
TCP  node1.cluster.local:31602 rr
  -> 10.233.96.32:9097            Masq    1      0          0
TCP  node1:31602 rr
  -> 10.233.96.32:9097            Masq    1      0          0
TCP  localhost:31602 rr
  -> 10.233.96.32:9097            Masq    1      0          0

You can see that traffic to node1:31602 is forwarded to 10.233.96.32:9097, i.e. the IP address and port of the service's Pod.

  • IP routing and forwarding rules

Next, let's look at the routing rules: traffic destined for the 10.233.96.0/24 segment is sent to tunl0 and travels through the tunnel to node2 before reaching the service.

route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.233.92.0     node3.cluster.l 255.255.255.0   UG    0      0        0 tunl0
10.233.96.0     node2.cluster.l 255.255.255.0   UG    0      0        0 tunl0
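To observe the SNAT described in section 3 directly, you can capture on node2's tunl0 while requesting node1:31602 from outside the cluster. A sketch, assuming tcpdump is installed on node2; the expectation is that the source shown is node1's tunl0 address rather than the real client IP:

# On node2: watch tunneled traffic toward the Pod port
tcpdump -nn -i tunl0 tcp port 9097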

  2. Accessing node2, the node where the workload's Pod runs
  • Service forwarding rules

ipvsadm -L
TCP  node2:31602 rr
  -> 10.233.96.32:9097            Masq    1      0          0
TCP  node2:31602 rr
  -> 10.233.96.32:9097            Masq    1      0          0
TCP  node2.cluster.local:31602 rr
  -> 10.233.96.32:9097            Masq    1      0          1
TCP  node2:31602 rr
  -> 10.233.96.32:9097            Masq    1      0          0
TCP  localhost:31602 rr
  -> 10.233.96.32:9097            Masq    1      0          0

Just as with node1, access to the NodePort on node2 is also forwarded to the IP address and port of the service's Pod.

  • Routing and forwarding rules

The routing rules, however, are different: packets destined for 10.233.96.32 are sent to cali73daeaf4b12. cali73daeaf4b12 forms a veth pair with the network interface inside the Pod, so traffic is delivered straight into the service's Pod.

route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.233.90.0     node1.cluster.l 255.255.255.0   UG    0      0        0 tunl0
10.233.92.0     node3.cluster.l 255.255.255.0   UG    0      0        0 tunl0
10.233.96.32    0.0.0.0         255.255.255.255 UH    0      0        0 cali73daeaf4b12

From the command output above we can conclude: when accessing a node that does not host the workload's Pod, traffic is forwarded through tunl0; when accessing the node that does host the Pod, traffic bypasses tunl0 and is routed directly into the Pod.
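If you want to confirm which cali* device pairs with the Pod, the standard veth trick works: read the peer ifindex from inside the Pod, then match it on the host. A sketch, assuming the container image ships cat; the index 7 is a hypothetical example value:

# Inside the Pod: ifindex of the host-side end of the veth pair
kubectl -n tekton-pipelines exec tekton-dashboard-75c65d785b-xbgk6 -- cat /sys/class/net/eth0/iflink
# On node2: the interface with that index is the cali* device
ip -o link | grep '^7:'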

4.3 Option 1: add the tunl0 addresses to the network policy whitelist

  • View tunl0 information

node1

ifconfig
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 10.233.90.0  netmask 255.255.255.255

node2

ifconfig
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 10.233.96.0  netmask 255.255.255.255

node3

ifconfig
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 10.233.92.0  netmask 255.255.255.255
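Instead of running ifconfig on every node, the tunnel addresses can be read in one shot from the node annotation that Calico maintains in IPIP mode (annotation key assumed from Calico's conventions):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.projectcalico\.org/IPv4IPIPTunnelAddr}{"\n"}{end}'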

  • Network policy configuration

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: foo
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.2.3.4/32
    - ipBlock:
        cidr: 10.233.90.0/32
    - ipBlock:
        cidr: 10.233.96.0/32
    - ipBlock:
        cidr: 10.233.92.0/32
    - namespaceSelector:
        matchExpressions:
        - key: region
          operator: NotIn
          values:
          - bar

  • Test verification

This falls short of expectations: all traffic coming through tunl0 is now allowed. A workload in the bar namespace can still reach the service via node1:31602, node3:31602, or tekton-dashboard.tekton-pipelines.svc:9097 (as long as the workload is not on node2), so the traffic cannot be restricted.
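A sketch of the kind of check used here, assuming a curl-capable test image and that the policies are applied in the service's namespace:

# Throwaway pod in the bar namespace hitting node1's NodePort;
# under Option 1 this still succeeds, which is exactly the problem
kubectl -n bar run np-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -s -m 5 http://10.102.123.117:31602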

4.4 Option 2: use Local mode

  • Modify the svc's externalTrafficPolicy to Local

kubectl -n tekton-pipelines get svc tekton-dashboard -o yaml
apiVersion: v1
kind: Service
metadata:
  name: tekton-dashboard
  namespace: tekton-pipelines
spec:
  clusterIP: 10.233.5.155
  externalTrafficPolicy: Local
  ...
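The change can be verified straight from the service object:

kubectl -n tekton-pipelines get svc tekton-dashboard -o jsonpath='{.spec.externalTrafficPolicy}'
# expected output: Local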

  • Deny all ingress traffic

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: test-network-policy-deny-all
  namespace: foo
spec:
  podSelector:
    matchLabels: {}
  ingress: []

  • Add the access whitelist

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: foo
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.2.3.4/32

  • Test verification

This meets expectations. The network policies above satisfy the business requirement: access from the bar namespace is blocked, while external access forwarded by the LB to the NodePort is allowed.
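A sketch of the final verification, under the same assumptions as in 4.3:

# From the bar namespace: the request should now time out, denied by the deny-all policy
kubectl -n bar run np-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -s -m 5 http://10.102.123.104:31602
# Traffic arriving via the LB keeps its whitelisted source (10.2.3.4) thanks to Local mode,
# so it matches the ipBlock rule and is allowed through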

5. Summary

Networking is one of the relatively difficult parts of Kubernetes, yet it is also an aspect with a large and far-reaching impact on the business. Spending more time on networking is therefore both necessary and worthwhile.

Starting from the business requirement, this article examined Calico's network modes and solved the problem of a NetworkPolicy not meeting expectations because SNAT changes the source IP.

In Calico's IPIP mode, an access policy for NodePort requires the externalTrafficPolicy: Local traffic forwarding mode. Following network policy best practice, first deny all traffic, then add whitelist policies.
