
DPVS fullnat mode deployment

2022-06-26 00:39:00 tinychen777

This article mainly describes how to deploy DPVS in FullNAT mode on CentOS 7.9 and how to install the toa module on the RealServers to obtain the client's real IP.

Previous articles have already introduced DPVS and its deployment, as well as how DPDK is applied inside DPVS and the underlying principles; readers who need that background can read those first. The deployment steps in the earlier article only covered installing DPVS itself and did not touch the configuration of the various load-balancing modes. In addition, more than half a year has passed and both DPVS and the corresponding DPDK version have been updated, so here is a new, more detailed deployment tutorial.

The DPVS version installed in this article is 1.8-10 and the dpdk version is 18.11.2; these differ from the versions used previously, and so do some of the installation steps and operations.

1、Preparation

In addition to installing the software, we also need to adjust the machine's hardware settings. DPVS has certain hardware requirements (mainly because the underlying DPDK does), and the DPDK project publishes a supported-hardware list; although the list covers a wide range of platforms, in practice the best compatibility and performance still seem to come from Intel hardware platforms.

1.1 Hardware

1.1.1 Hardware parameters

  • Machine model: PowerEdge R630
  • CPU: 2 x Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
  • Memory: 16G x 8 DDR4-2400 MT/s (configured at 2133 MT/s), 64G per CPU, 128G in total
  • NIC 1: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
  • NIC 2: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)
  • OS: CentOS Linux release 7.9.2009 (Core)
  • Kernel: 3.10.0-1160.36.2.el7.x86_64

1.1.2 BIOS settings

Before starting, enter the BIOS to disable hyper-threading and enable the NUMA policy. DPVS is a very typical CPU-bound application (its process keeps CPU usage at 100%), so to guarantee performance it is recommended to disable hyper-threading. Also, because DPVS uses hugepage memory that we allocate manually, it is best to enable the NUMA policy directly in the BIOS to ensure CPU affinity.
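
To double-check these settings from the OS after booting, lscpu can be used; this is just a quick sanity-check sketch (with hyper-threading disabled, "Thread(s) per core" should be 1, and with NUMA enabled both nodes and their CPU lists should appear).

# quick sanity check (sketch) for the BIOS settings above
$ lscpu | grep -E "Thread\(s\) per core|NUMA node"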

1.1.3 NIC PCI IDs

After the NICs are taken over by the dpvs PMD driver, it is easy to get them confused if there are many of them, so it is best to record the corresponding NIC name, MAC address and PCI ID in advance to avoid mix-ups later.

Use the lspci command to view each NIC's PCI ID; alternatively, the device symlink inside the directory named after the NIC under /sys/class/net/ also reveals the PCI ID. Finally, record the three parameters together as NIC name - MAC address - PCI ID.

$ lspci | grep -i net
01:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

$ file /sys/class/net/eth0/device
/sys/class/net/eth0/device: symbolic link to `../../../0000:01:00.0'
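
To record all three parameters in one pass, a small shell loop over /sys/class/net can help; this is only a convenience sketch, and the eth* glob and the output file path are assumptions to be adjusted to your own interface naming.

# sketch: save "NIC name - MAC address - PCI ID" for every ethX interface
$ for nic in /sys/class/net/eth*; do
      name=$(basename "$nic")
      mac=$(cat "$nic/address")
      pci=$(basename "$(readlink -f "$nic/device")")
      echo "$name - $mac - $pci"
  done | tee /root/nic-pci-map.txt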

1.2 Software

1.2.1 System software

# tools needed to compile and install dpvs, and to view CPU NUMA information
$ yum group install "Development Tools" 
$ yum install patch libnuma* numactl numactl-devel kernel-devel openssl* popt* libpcap-devel -y
# libnl3-devel must be installed if ipvsadm needs IPv6 support
$ yum install libnl libnl-devel libnl3 libnl3-devel -y


# note: the kernel and kernel component packages must match the currently running kernel version
$ uname -r
3.10.0-1160.36.2.el7.x86_64
$ rpm -qa | grep kernel | grep "3.10.0-1160.36.2"
kernel-3.10.0-1160.36.2.el7.x86_64
kernel-devel-3.10.0-1160.36.2.el7.x86_64
kernel-tools-libs-3.10.0-1160.36.2.el7.x86_64
kernel-debug-devel-3.10.0-1160.36.2.el7.x86_64
kernel-tools-3.10.0-1160.36.2.el7.x86_64
kernel-headers-3.10.0-1160.36.2.el7.x86_64

1.2.2 dpvs and dpdk

# for dpvs we pull the latest version directly from GitHub with git
$ git clone https://github.com/iqiyi/dpvs.git
# for dpdk we download version 18.11.2 from the official site and put it inside the dpvs directory for convenience
$ cd dpvs/
$ wget https://fast.dpdk.org/rel/dpdk-18.11.2.tar.xz
$ tar -Jxvf dpdk-18.11.2.tar.xz

After completing the steps above, we can start the installation below.

2、Installation steps

2.1 DPDK installation

2.1.1 Install the dpdk patches

The patch directory in the dpvs source tree contains the patches for the supported dpdk versions. If you are not sure which patches you need, the official recommendation is to apply all of them.

$ ll dpvs/patch/dpdk-stable-18.11.2
total 44
-rw-r--r-- 1 root root  4185 Jul 22 12:47 0001-kni-use-netlink-event-for-multicast-driver-part.patch
-rw-r--r-- 1 root root  1771 Jul 22 12:47 0002-net-support-variable-IP-header-len-for-checksum-API.patch
-rw-r--r-- 1 root root  1130 Jul 22 12:47 0003-driver-kni-enable-flow_item-type-comparsion-in-flow_.patch
-rw-r--r-- 1 root root  1706 Jul 22 12:47 0004-rm-rte_experimental-attribute-of-rte_memseg_walk.patch
-rw-r--r-- 1 root root 16538 Jul 22 12:47 0005-enable-pdump-and-change-dpdk-pdump-tool-for-dpvs.patch
-rw-r--r-- 1 root root  2189 Jul 22 12:47 0006-enable-dpdk-eal-memory-debug.patch

Applying the patches is also very simple:

# first copy all the patches into the dpdk root directory
$ cp dpvs/patch/dpdk-stable-18.11.2/*patch dpvs/dpdk-stable-18.11.2/
$ cd dpvs/dpdk-stable-18.11.2/
# then apply them one by one in order of their file names
$ patch -p 1 < 0001-kni-use-netlink-event-for-multicast-driver-part.patch
patching file kernel/linux/kni/kni_net.c
$ patch -p 1 < 0002-net-support-variable-IP-header-len-for-checksum-API.patch
patching file lib/librte_net/rte_ip.h
$ patch -p 1 < 0003-driver-kni-enable-flow_item-type-comparsion-in-flow_.patch
patching file drivers/net/mlx5/mlx5_flow.c
$ patch -p 1 < 0004-rm-rte_experimental-attribute-of-rte_memseg_walk.patch
patching file lib/librte_eal/common/eal_common_memory.c
Hunk #1 succeeded at 606 (offset 5 lines).
patching file lib/librte_eal/common/include/rte_memory.h
$ patch -p 1 < 0005-enable-pdump-and-change-dpdk-pdump-tool-for-dpvs.patch
patching file app/pdump/main.c
patching file config/common_base
patching file lib/librte_pdump/rte_pdump.c
patching file lib/librte_pdump/rte_pdump.h
$ patch -p 1 < 0006-enable-dpdk-eal-memory-debug.patch
patching file config/common_base
patching file lib/librte_eal/common/include/rte_malloc.h
patching file lib/librte_eal/common/rte_malloc.c

2.1.2 Compile and install dpdk

$ cd dpvs/dpdk-stable-18.11.2
$ make config T=x86_64-native-linuxapp-gcc
$ make 

# the message "Build complete [x86_64-native-linuxapp-gcc]" indicates that make succeeded

$ export RTE_SDK=$PWD
$ export RTE_TARGET=build

The ndo_change_mtu problem that occurred when compiling the previously used dpdk 17.11.2 version no longer appears during this build.

2.1.3 Configure hugepages

Unlike most ordinary programs, dpvs (via dpdk) does not request memory from the operating system on demand; it uses hugepage memory directly, which greatly improves memory allocation efficiency. The hugepage configuration is fairly simple. The official procedure uses 2MB hugepages; the 28672 below means that 28672 2MB hugepages are allocated per node, i.e. 56GB per NUMA node and 112GB in total. The amount can be adjusted to the machine's memory size, but allocating less than 1GB may cause startup errors.

Single-CPU (single NUMA node) systems can refer to the official dpdk documentation.

# for NUMA machine
$ echo 28672 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
$ echo 28672 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

$ mkdir /mnt/huge
$ mount -t hugetlbfs nodev /mnt/huge

# to mount it automatically at boot, add an entry to /etc/fstab
$ echo "nodev /mnt/huge hugetlbfs defaults 0 0" >> /etc/fstab

# after the configuration is complete, memory usage increases immediately
$ free -g	#  Before configuration 
              total        used        free      shared  buff/cache   available
Mem:            125           1         122           0           1         123
$ free -g	#  After the configuration 
              total        used        free      shared  buff/cache   available
Mem:            125         113          10           0           1          11
# numactl also shows that 56G of memory has indeed been allocated on each of the two CPUs
$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18
node 0 size: 64184 MB
node 0 free: 4687 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19
node 1 size: 64494 MB
node 1 free: 5759 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

2.1.4 Configure ulimit

By default the system's ulimit limits the number of open file descriptors; if it is too small it will affect the normal operation of dpvs, so we raise it:

$ ulimit -n 655350
$ echo "ulimit -n 655350" >> /etc/rc.local
$ chmod a+x /etc/rc.local
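
Depending on how dpvs is started, you may also want the higher limit to apply to login sessions; a sketch using /etc/security/limits.conf (the values simply mirror the one used above):

# optional sketch: also raise the open-file limit for login sessions
$ cat >> /etc/security/limits.conf <<'EOF'
*    soft    nofile    655350
*    hard    nofile    655350
EOF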

2.2 Load the driver modules

First we load the dpdk PMD driver modules that were just compiled, and then switch the NICs from their default driver to this PMD driver.

$ modprobe uio
$ insmod /path/to/dpdk-stable-18.11.2/build/kmod/igb_uio.ko
$ insmod /path/to/dpdk-stable-18.11.2/build/kmod/rte_kni.ko carrier=on

Note that the carrier parameter was added in DPDK v18.11 and defaults to off; rte_kni.ko must be loaded with carrier=on for the KNI devices to work properly.
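
Because igb_uio.ko and rte_kni.ko live outside the standard kernel module tree, they are not reloaded automatically after a reboot. A simple sketch, reusing the rc.local approach from the ulimit step above (the /path/to placeholder is the same one used throughout this article):

# sketch: reload the DPDK kernel modules at boot via rc.local
$ cat >> /etc/rc.local <<'EOF'
modprobe uio
insmod /path/to/dpdk-stable-18.11.2/build/kmod/igb_uio.ko
insmod /path/to/dpdk-stable-18.11.2/build/kmod/rte_kni.ko carrier=on
EOF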

The dpdk-stable-18.11.2/usertools directory contains scripts that help install and use dpdk; we can use them to reduce configuration complexity. Here we use the dpdk-devbind.py script to change the NICs' driver.

# first bring down the NICs that will be bound to the PMD driver
$ ifdown eth{2,3,4,5}

# check the NIC status, paying special attention to each NIC's PCI ID; only the relevant output is shown below
$ ./usertools/dpdk-devbind.py --status
Network devices using kernel driver
===================================
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eth2 drv=ixgbe unused=igb_uio
0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eth3 drv=ixgbe unused=igb_uio
0000:82:00.0 'Ethernet 10G 2P X520 Adapter 154d' if=eth4 drv=ixgbe unused=igb_uio
0000:82:00.1 'Ethernet 10G 2P X520 Adapter 154d' if=eth5 drv=ixgbe unused=igb_uio

The output above shows that these NICs currently use the ixgbe driver, while our goal is to make them use the igb_uio driver. If there are many NICs in the system at this point, the NIC name - MAC address - PCI ID records made earlier come in handy.

# bind the NICs that dpvs will use to the igb_uio driver
$ ./usertools/dpdk-devbind.py -b igb_uio 0000:04:00.0
$ ./usertools/dpdk-devbind.py -b igb_uio 0000:04:00.1
$ ./usertools/dpdk-devbind.py -b igb_uio 0000:82:00.0
$ ./usertools/dpdk-devbind.py -b igb_uio 0000:82:00.1
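
If there are many NICs to hand over, the bind commands can also be wrapped in a small loop; this is just a sketch reusing the PCI IDs from this example (never include the management NIC in the list).

# sketch: bind a list of PCI IDs to igb_uio in one loop
$ for pci in 0000:04:00.0 0000:04:00.1 0000:82:00.0 0000:82:00.1; do
      ./usertools/dpdk-devbind.py -b igb_uio "$pci"
  done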

# check again whether the binding succeeded; only the relevant output is shown below
$ ./usertools/dpdk-devbind.py --status
Network devices using DPDK-compatible driver
============================================
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
0000:82:00.0 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=ixgbe
0000:82:00.1 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=ixgbe

2.3 DPVS installation

$ cd /path/to/dpdk-stable-18.11.2/
$ export RTE_SDK=$PWD
$ cd /path/to/dpvs
$ make 
$ make install
# check the binaries in the bin directory
$ ls /path/to/dpvs/bin/
dpip  dpvs  ipvsadm  keepalived

# watch the messages during make, especially the keepalived part; if the following appears, IPVS has IPv6 support
Keepalived configuration
------------------------
Keepalived version       : 2.0.19
Compiler                 : gcc
Preprocessor flags       : -D_GNU_SOURCE -I/usr/include/libnl3
Compiler flags           : -g -g -O2 -fPIE -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -O2
Linker flags             : -pie -Wl,-z,relro -Wl,-z,now
Extra Lib                : -lm -lcrypto -lssl -lnl-genl-3 -lnl-3
Use IPVS Framework       : Yes
IPVS use libnl           : Yes
IPVS syncd attributes    : No
IPVS 64 bit stats        : No



# for easier management, symlink the related commands into /sbin so they can be run globally
$ ln -s /path/to/dpvs/bin/dpvs /sbin/dpvs
$ ln -s /path/to/dpvs/bin/dpip /sbin/dpip
$ ln -s /path/to/dpvs/bin/ipvsadm /sbin/ipvsadm
$ ln -s /path/to/dpvs/bin/keepalived /sbin/keepalived

# check whether the dpvs commands work properly; note that the other commands only work once the dpvs process has been started
$ dpvs -v
dpvs version: 1.8-10, build on 2021.07.26.15:34:26

2.4 Configure dpvs.conf

The dpvs/conf directory contains sample configuration files for various dpvs setups, and the dpvs.conf.items file documents all available parameters; it is recommended to read through the basic syntax before configuring. By default dpvs reads its configuration from /etc/dpvs.conf.

A brief summary of a few parts follows (! is the comment character):

  • The log level can be manually raised to DEBUG and the log output location changed, to make troubleshooting easier

    global_defs {
        log_level   DEBUG
        log_file    /path/to/dpvs/logs/dpvs.log
    }
    
  • If you need to define multiple NICs, you can refer to this configuration

    netif_defs {
        <init> pktpool_size     1048575
        <init> pktpool_cache    256
    
        <init> device dpdk0 {
            rx {
                queue_number        16
                descriptor_number   1024
                rss                 all
            }
            tx {
                queue_number        16
                descriptor_number   1024
            }
            fdir {
                mode                perfect
                pballoc             64k
                status              matched
            }
            kni_name                dpdk0.kni
        }
    
        <init> device dpdk1 {
            rx {
                queue_number        16
                descriptor_number   1024
                rss                 all
            }
            tx {
                queue_number        16
                descriptor_number   1024
            }
            fdir {
                mode                perfect
                pballoc             64k
                status              matched
            }
            kni_name                dpdk1.kni
        }
    
        <init> device dpdk2 {
            rx {
                queue_number        16
                descriptor_number   1024
                rss                 all
            }
            tx {
                queue_number        16
                descriptor_number   1024
            }
            fdir {
                mode                perfect
                pballoc             64k
                status              matched
            }
            kni_name                dpdk2.kni
        }
    
        <init> device dpdk3 {
            rx {
                queue_number        16
                descriptor_number   1024
                rss                 all
            }
            tx {
                queue_number        16
                descriptor_number   1024
            }
            fdir {
                mode                perfect
                pballoc             64k
                status              matched
            }
            kni_name                dpdk3.kni
        }
    
    }
    
  • The same receive/transmit queue index of multiple NICs can share the same CPU

        <init> worker cpu1 {
            type    slave
            cpu_id  1
            port    dpdk0 {
                rx_queue_ids     0
                tx_queue_ids     0
            }
            port    dpdk1 {
                rx_queue_ids     0
                tx_queue_ids     0
            }
            port    dpdk2 {
                rx_queue_ids     0
                tx_queue_ids     0
            }
            port    dpdk3 {
                rx_queue_ids     0
                tx_queue_ids     0
            }
        }
    
  • If you need a dedicated CPU to handle ICMP packets, add icmp_redirect_core to that worker

        <init> worker cpu16 {
            type    slave
            cpu_id  16
            icmp_redirect_core
            port    dpdk0 {
                rx_queue_ids     15
                tx_queue_ids     15
            }
        }
    

After the DPVS process has started, the corresponding kni NIC can be configured directly through the Linux system's network configuration files and used exactly like an ordinary NIC such as eth0.
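
On CentOS 7, for example, the kni device can be given an ordinary ifcfg file; the sketch below is purely illustrative, and the address values just mirror the desensitized example output that follows.

# sketch: /etc/sysconfig/network-scripts/ifcfg-dpdk0.kni
DEVICE=dpdk0.kni
ONBOOT=yes
BOOTPROTO=static
IPADDR=1.1.1.1
PREFIX=24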

After it is running successfully, both the dpip command and the normal ip/ifconfig commands can see the corresponding dpdk NICs, and IPv4 and IPv6 both work normally. Only part of the output is shown below; the IP and MAC information has been desensitized and the IPv6 information removed.

$ dpip link show
1: dpdk0: socket 0 mtu 1500 rx-queue 16 tx-queue 16
    UP 10000 Mbps full-duplex auto-nego
    addr AA:BB:CC:23:33:33 OF_RX_IP_CSUM OF_TX_IP_CSUM OF_TX_TCP_CSUM OF_TX_UDP_CSUM

$ ip a
67: dpdk0.kni: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether AA:BB:CC:23:33:33 brd ff:ff:ff:ff:ff:ff
    inet 1.1.1.1/24 brd 1.1.1.255 scope global dpdk0.kni
       valid_lft forever preferred_lft forever
       
$ ifconfig dpdk0.kni
dpdk0.kni: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 1.1.1.1  netmask 255.255.254.0  broadcast 1.1.1.255
        ether AA:BB:CC:23:33:33  txqueuelen 1000  (Ethernet)
        RX packets 1790  bytes 136602 (133.4 KiB)
        RX errors 0  dropped 52  overruns 0  frame 0
        TX packets 115  bytes 24290 (23.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

3、Configure FullNAT

To verify that our DPVS works properly, we follow the official configuration document and first set up the simplest two-arm FullNAT (FNAT) configuration. Referring to the official architecture diagram and adjusting the IP address information, we get the simple architecture shown below.

In this mode there is no need to configure the kni NICs virtualized by DPVS with the system's own tools such as ip or ifconfig.

(Architecture diagram: https://resource.tinychen.com/20210728123202.svg)

Here we use the dpdk2 NIC as the WAN port and the dpdk0 NIC as the LAN port.

# first add the VIP 10.0.96.204 to the dpdk2 NIC (wan)
$ dpip addr add 10.0.96.204/32 dev dpdk2

# then add two routes: one for the wan-side network segment and one for the RS network segment
$ dpip route add 10.0.96.0/24 dev dpdk2
$ dpip route add 192.168.229.0/24 dev dpdk0
# it is also best to add a default route to the gateway so that returning ICMP packets can get through
$ dpip route add default via 10.0.96.254 dev dpdk2

# create the forwarding service with the RR scheduling algorithm
# add service <VIP:vport> to forwarding, scheduling mode is RR.
# use ipvsadm --help for more info.
$ ipvsadm -A -t 10.0.96.204:80 -s rr

# for testing convenience we add only one RS here
# add two RS for service, forwarding mode is FNAT (-b)
$ ipvsadm -a -t 10.0.96.204:80 -r 192.168.229.1 -b

# add a LocalIP (LIP); this is required by FNAT mode
# add at least one Local-IP (LIP) for FNAT on LAN interface
$ ipvsadm --add-laddr -z 192.168.229.204 -t 10.0.96.204:80 -F dpdk0


# now check the result
$ dpip route show
inet 192.168.229.204/32 via 0.0.0.0 src 0.0.0.0 dev dpdk0 mtu 1500 tos 0 scope host metric 0 proto auto
inet 10.0.96.204/32 via 0.0.0.0 src 0.0.0.0 dev dpdk2 mtu 1500 tos 0 scope host metric 0 proto auto
inet 10.0.96.0/24 via 0.0.0.0 src 0.0.0.0 dev dpdk2 mtu 1500 tos 0 scope link metric 0 proto auto
inet 192.168.229.0/24 via 0.0.0.0 src 0.0.0.0 dev dpdk0 mtu 1500 tos 0 scope link metric 0 proto auto
inet 0.0.0.0/0 via 10.0.96.254 src 0.0.0.0 dev dpdk2 mtu 1500 tos 0 scope global metric 0 proto auto

$ dpip addr show
inet 10.0.96.204/32 scope global dpdk2
     valid_lft forever preferred_lft forever
inet 192.168.229.204/32 scope global dpdk0
     valid_lft forever preferred_lft forever

$ ipvsadm  -ln
IP Virtual Server version 0.0.0 (size=0)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.96.204:80 rr
  -> 192.168.229.1:80              FullNat 1      0          0
$ ipvsadm  -G
VIP:VPORT            TOTAL    SNAT_IP              CONFLICTS  CONNS
10.0.96.204:80    1
                              192.168.229.204       0          0
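
Note that the dpip and ipvsadm rules above are runtime state and are lost when the dpvs process restarts, so it is convenient to collect them into a small script for re-applying; this is just a sketch repeating the commands from this example (the script path is arbitrary).

# sketch: keep the FNAT runtime configuration in one script
$ cat > /usr/local/sbin/dpvs-fnat.sh <<'EOF'
#!/bin/bash
dpip addr add 10.0.96.204/32 dev dpdk2
dpip route add 10.0.96.0/24 dev dpdk2
dpip route add 192.168.229.0/24 dev dpdk0
dpip route add default via 10.0.96.254 dev dpdk2
ipvsadm -A -t 10.0.96.204:80 -s rr
ipvsadm -a -t 10.0.96.204:80 -r 192.168.229.1 -b
ipvsadm --add-laddr -z 192.168.229.204 -t 10.0.96.204:80 -F dpdk0
EOF
$ chmod +x /usr/local/sbin/dpvs-fnat.sh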

Then we start an nginx on the RS that returns the client IP and port number, and check the result:

    server {
        listen 80 default;

        location / {
            default_type text/plain;
            return 200 "Your IP and port is $remote_addr:$remote_port\n";
        }

    }

Test the VIP directly with the ping and curl commands:

$ ping -c4 10.0.96.204
PING 10.0.96.204 (10.0.96.204) 56(84) bytes of data.
64 bytes from 10.0.96.204: icmp_seq=1 ttl=54 time=47.2 ms
64 bytes from 10.0.96.204: icmp_seq=2 ttl=54 time=48.10 ms
64 bytes from 10.0.96.204: icmp_seq=3 ttl=54 time=48.5 ms
64 bytes from 10.0.96.204: icmp_seq=4 ttl=54 time=48.5 ms

--- 10.0.96.204 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 8ms
rtt min/avg/max/mdev = 47.235/48.311/48.969/0.684 ms

$ curl 10.0.96.204
Your IP and port is 192.168.229.204:1033

We can see that no matter which machine we test from, only the LIP's address and port are returned; to get the client's real IP we need to install the TOA module.

4、Install the TOA module on the RS

The open-source community currently provides many versions of the toa module. To ensure compatibility we use the toa and uoa modules shipped with dpvs directly; according to the official description, their toa module is derived from the Alibaba TOA.

TOA source code is included into DPVS project(in directory kmod/toa) since v1.7 to support IPv6 and NAT64. It is derived from the Alibaba TOA. For IPv6 applications which need client’s real IP address, we suggest to use this TOA version.

Because both our RS machines and the DPVS machine run CentOS 7, we can compile the toa module directly on the DPVS machine and then copy it to each RS machine.

$ cd /path/to/dpvs/kmod/toa/
$ make

After a successful compilation, a toa.ko module file is generated in the current directory; this is the file we need. Load it with insmod and then verify:

$ insmod toa.ko
$ lsmod  | grep toa
toa                   279641  0
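
Since the module only needs to be compiled once, distributing it to the remaining RS machines is just a copy; a sketch, where the hostname is a placeholder:

# sketch: copy the compiled module to an RS machine and load it there
$ scp toa.ko root@rs-host:/root/toa.ko
$ ssh root@rs-host "insmod /root/toa.ko && lsmod | grep toa"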

To make sure the module is loaded at boot, add the following line to the rc.local file:

/usr/sbin/insmod /path/to/toa.ko
# for example:
# /usr/sbin/insmod /home/dpvs/kmod/toa/toa.ko

Besides the toa module there is also a uoa module for the UDP protocol; its compilation and installation process is exactly the same as for toa and is not described in detail again.
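
For reference, a minimal sketch of those same steps applied to uoa (kmod/uoa is the corresponding directory in the dpvs source tree):

# sketch: build and load the uoa module the same way as toa
$ cd /path/to/dpvs/kmod/uoa/
$ make
$ insmod uoa.ko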

After loading the toa module on the RS machine, we run the curl test again:

$ curl 10.0.96.204
Your IP and port is 172.16.0.1:62844

At this point the whole DPVS FullNAT deployment is complete and the mode works normally. Since DPVS supports a wide range of configuration combinations, I will write a separate article about configuring IPv6, NAT64, keepalived, bonding and Master/Backup mode.
