[Deep Learning Lightweight Backbone] EdgeViTs (CVPR 2022)
2022-06-25 07:51:00 【Shuo Chi】
2022 EdgeViTs CVPR
The Chinese University of Hong Kong & Samsung propose EdgeViT: a new lightweight vision Transformer.
Paper: https://arxiv.org/abs/2205.03436 (the PDF is about 100 MB, so download with care; the page may load slowly)
Code: not yet released
1. Introduction
1.1 Abstract
In computer vision, self-attention-based models such as Vision Transformers (ViTs) have become a highly competitive alternative to CNNs. Although increasingly powerful variants keep pushing recognition accuracy higher, the quadratic complexity of self-attention means that existing ViTs remain demanding in both computation and model size. Several successful CNN design choices (e.g., convolutions and hierarchical structures) have been introduced into recent ViTs, but they are still not enough to fit within the limited compute budget of mobile devices. This motivated the recent lightweight MobileViT, built on the state-of-the-art MobileNet-v2, yet a performance gap between MobileViT and MobileNet-v2 still remains.
In this work, the authors push this line of research further and introduce EdgeViTs, a new family of lightweight ViTs that, for the first time, lets self-attention-based vision models compete with the best lightweight CNNs in the trade-off between accuracy and on-device efficiency. This is achieved with a cost-effective local-global-local (LGL) information-exchange bottleneck that optimally integrates self-attention and convolution. For device-dedicated evaluation, instead of relying on inaccurate proxies such as FLOPs or parameter counts, the authors adopt a practical protocol that directly measures on-device latency and energy efficiency.
1.2 The problem
The paper points out that current attempts to make ViTs lightweight generally fall into three categories:
- Hierarchical architectures that progressively down-sample the spatial resolution (i.e., the token sequence length) at each stage;
- Locally-grouped self-attention and parameter sharing, used to keep the effective input token sequence length under control;
- Pooled attention schemes that sub-sample the keys and values by a fixed factor (a minimal sketch of this last idea is given below).
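To make the third idea concrete, below is a minimal, self-contained sketch of the general pooled / spatial-reduction attention idea; this is my own illustration, not EdgeViT's module, and the class name PooledKVAttention and its hyper-parameters are made up for this example:

```python
import torch
import torch.nn as nn

class PooledKVAttention(nn.Module):
    """Sketch of pooled attention: keys/values are spatially sub-sampled by a factor r,
    so the attention cost drops from O(N^2) to O(N * N / r^2) for N = H*W tokens."""
    def __init__(self, dim, heads=4, r=2):
        super().__init__()
        self.heads = heads
        self.head_dim = dim // heads
        self.scale = self.head_dim ** -0.5
        self.q = nn.Conv2d(dim, dim, kernel_size=1, bias=False)
        self.kv = nn.Conv2d(dim, dim * 2, kernel_size=1, bias=False)
        self.pool = nn.AvgPool2d(kernel_size=r, stride=r)  # sub-sample keys/values by r
        self.proj = nn.Conv2d(dim, dim, kernel_size=1, bias=False)

    def forward(self, x):
        B, C, H, W = x.shape
        q = self.q(x).view(B, self.heads, self.head_dim, H * W)        # all N queries
        kv = self.kv(self.pool(x))                                     # only N / r^2 keys and values
        h, w = kv.shape[-2:]
        k, v = kv.view(B, self.heads, 2 * self.head_dim, h * w).split(self.head_dim, dim=2)
        attn = (q.transpose(-2, -1) @ k * self.scale).softmax(dim=-1)  # (B, heads, N, N/r^2)
        out = (attn @ v.transpose(-2, -1)).transpose(-2, -1).reshape(B, C, H, W)
        return self.proj(out)

if __name__ == '__main__':
    y = PooledKVAttention(dim=64, heads=4, r=2)(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```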
The trend behind these designs is to build ever more complex and more powerful ViTs that challenge the best-performing CNNs; however, they still fall short of what is actually needed to run on mobile devices, namely:
- High inference efficiency (e.g., low latency and low energy consumption), so that the running cost stays affordable and more devices and applications can be supported; this is the direct metric we really care about in practice.
- A model size (number of parameters) that fits the storage of today's mobile devices.
- Implementation simplicity, which also matters in practice. For broader deployment, the model should be built from standard operations that common deep-learning frameworks and runtimes (e.g., ONNX, TensorRT and TorchScript) support and optimize efficiently, without expensive per-platform engineering for each network design (see the export sketch below).
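As a rough illustration of that last point, a model built only from such standard operations can be exported with stock PyTorch tooling. The snippet below is a sketch that uses a toy stand-in model and arbitrary file names; the EdgeViT implementation later in this post could be substituted in the same way:

```python
import torch
import torch.nn as nn

# Toy stand-in model built only from standard ops; any network composed of
# Conv2d / BatchNorm / pooling / Linear layers can be exported the same way.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 1000),
).eval()

dummy = torch.randn(1, 3, 224, 224)

# TorchScript: trace the model into a serialized, Python-free graph.
traced = torch.jit.trace(model, dummy)
traced.save("model_ts.pt")

# ONNX: export the same graph so it can be consumed by ONNX Runtime, TensorRT, etc.
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["logits"], opset_version=13)
```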
The contributions of this paper are as follows:
(1) Lightweight ViT design is studied from the perspective of deployment and execution on actual devices;
(2) For optimal scalability and deployment, a new family of efficient ViTs, called EdgeViTs, is proposed, designed around an optimal decomposition of the self-attention mechanism using standard primitive operations;
(3) For device-relevant performance evaluation, the latency and energy consumption of different models are measured directly, instead of relying on other criteria such as FLOPs or parameter counts.
2. Network architecture
2.1 Overall design

- Figure (a) shows the overall framework, similar in spirit to a ResNet-style design. To target mobile/edge devices, EdgeViT adopts the hierarchical pyramid structure used in recent ViT variants;
- Figure (b) shows the cost-efficient local-global-local (LGL) bottleneck; this sparse-attention module further reduces the cost of self-attention;
- Figure (c) shows that EdgeViTs achieve a better accuracy-latency trade-off.
2.2 Local-Global-Local bottleneck (LGL)
- Self-attention has proven to be a very effective way to learn global information and long-range spatial dependencies, which is key to visual recognition.
- On the other hand, because images have a high degree of spatial redundancy (e.g., nearby patches are semantically similar), attending over all spatial patches, even on a down-sampled feature map, is inefficient.
Therefore, compared with previous Transformer blocks that compute self-attention at every spatial location, the LGL bottleneck computes self-attention only on a subset of the input tokens while still supporting full spatial interactions, as in standard multi-head self-attention (MHSA). It shrinks the set of tokens that attend to each other, while preserving the two underlying information flows that model global and local context.
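To make the saving explicit, here is a rough back-of-the-envelope estimate (ignoring constant factors and the FFN cost), assuming N = H·W tokens of dimension d, a k×k local window, and one representative token per r×r window:

$$
\underbrace{O(N^{2} d)}_{\text{full MHSA}}
\;\longrightarrow\;
\underbrace{O(N k^{2} d)}_{\text{local agg./prop.}}
\;+\;
\underbrace{O\!\left(\left(\tfrac{N}{r^{2}}\right)^{2} d\right)}_{\text{global sparse attention}}
$$

For example, on a 14×14 feature map (N = 196) with r = 2, the quadratic attention term shrinks by a factor of r⁴ = 16.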
The LGL bottleneck is built from three efficient operations:
- Local aggregation: aggregates signals only from spatially nearby tokens within a local window;
- Global sparse attention: models long-range relationships among a set of representative tokens, where each such token is treated as the delegate of one local window;
- Local propagation: propagates the global context learned by the delegates back to the non-representative tokens in the same window.

Combining these, the LGL bottleneck lets any pair of tokens in the same feature map exchange information at low computational cost. Each component is described in detail below:
Local aggregation
For each token, depth-wise and point-wise convolutions aggregate information within a k×k local window (Figure 3(a)).
class LocalAgg(nn.Module):
    """ Local aggregation module: extracts local features with convolutions only.
    To keep the cost low it uses pointwise (1x1) convolutions plus a depthwise (3x3) convolution. """
    def __init__(self, channels):
        super(LocalAgg, self).__init__()
        self.bn = nn.BatchNorm2d(channels)
        # pointwise (1x1) convolution: a per-pixel fully connected layer that mixes channels
        self.pointwise_conv_0 = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        # depthwise (3x3) convolution: aggregates spatial context within a local window, one filter per channel
        self.depthwise_conv = nn.Conv2d(channels, channels, padding=1, kernel_size=3, groups=channels, bias=False)
        # normalization before the second pointwise convolution
        self.pointwise_prenorm_1 = nn.BatchNorm2d(channels)
        # pointwise (1x1) convolution: mixes channels again after the spatial aggregation
        self.pointwise_conv_1 = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.bn(x)
        x = self.pointwise_conv_0(x)
        x = self.depthwise_conv(x)
        x = self.pointwise_prenorm_1(x)
        x = self.pointwise_conv_1(x)
        return x
Global sparse attention
A sparse set of representative tokens, uniformly distributed in space, is sampled: each r×r window contributes one representative token, where r is the sub-sampling rate. Self-attention is then applied only to these selected tokens (Figure 3(b)). This differs from all existing ViTs, where all spatial tokens participate as queries in the self-attention computation.
class GlobalSparseAttention(nn.Module):
    """ Global module: samples a sparse set of representative tokens and runs self-attention over them only. """
    def __init__(self, channels, r, heads):
        """
        Args:
            channels: number of channels
            r: sub-sampling rate (one representative token per r x r window)
            heads: number of attention heads (multi-head self-attention, MHSA)
        """
        super(GlobalSparseAttention, self).__init__()
        # per-head channel dimension
        self.head_dim = channels // heads
        # scaling factor for the dot-product attention
        self.scale = self.head_dim ** -0.5
        self.num_heads = heads
        # with kernel_size=1 and stride=r this keeps exactly one token per r x r window
        # (a sparse sampler rather than a true average pooling)
        self.sparse_sampler = nn.AvgPool2d(kernel_size=1, stride=r)
        # compute q, k, v with a single 1x1 convolution
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.sparse_sampler(x)
        B, C, H, W = x.shape
        q, k, v = self.qkv(x).view(B, self.num_heads, -1, H * W).split([self.head_dim, self.head_dim, self.head_dim], dim=2)
        # scaled dot-product attention map over the representative tokens
        attn = (q.transpose(-2, -1) @ k * self.scale).softmax(-1)
        # weight the values by the attention map to obtain the global-attention output
        x = (v @ attn.transpose(-2, -1)).view(B, -1, H, W)
        return x
Local propagation
A transposed convolution propagates the global context information encoded in the representative tokens back to their neighboring tokens (Figure 3(c)).
class LocalPropagation(nn.Module):
    """ Local propagation module: spreads the global context of each representative token back to its r x r neighborhood. """
    def __init__(self, channels, r):
        super(LocalPropagation, self).__init__()
        # group normalization (a single group over all channels)
        self.norm = nn.GroupNorm(num_groups=1, num_channels=channels)
        # transposed convolution undoes the r-fold down-sampling of the GlobalSparseAttention module
        self.local_prop = nn.ConvTranspose2d(channels,
                                             channels,
                                             kernel_size=r,
                                             stride=r,
                                             groups=channels)
        # pointwise (1x1) convolution as the output projection
        self.proj = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.local_prop(x)
        x = self.norm(x)
        x = self.proj(x)
        return x
Finally, the LGL bottleneck can be expressed as:
$$
\begin{aligned}
X &= \mathrm{LocalAgg}(\mathrm{Norm}(X_{in})) + X_{in} \\
Y &= \mathrm{FFN}(\mathrm{Norm}(X)) + X \\
Z &= \mathrm{LocalProp}(\mathrm{GlobalSparseAttn}(\mathrm{Norm}(Y))) + Y \\
X_{out} &= \mathrm{FFN}(\mathrm{Norm}(Z)) + Z
\end{aligned}
$$
Complete code for the LGL block:
import torch
import torch.nn as nn
class Residual(nn.Module):
    """ Wraps a module with a residual (skip) connection """
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, x):
        return x + self.module(x)
class ConditionalPositionalEncoding(nn.Module):
    """ Conditional positional encoding: a 3x3 depthwise convolution that injects position information """
    def __init__(self, channels):
        super(ConditionalPositionalEncoding, self).__init__()
        self.conditional_positional_encoding = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels,
                                                         bias=False)

    def forward(self, x):
        x = self.conditional_positional_encoding(x)
        return x
class MLP(nn.Module):
    """ Feed-forward network (FFN) built from 1x1 convolutions with an expansion ratio of 4 """
    def __init__(self, channels):
        super(MLP, self).__init__()
        expansion = 4
        self.mlp_layer_0 = nn.Conv2d(channels, channels * expansion, kernel_size=1, bias=False)
        self.mlp_act = nn.GELU()
        self.mlp_layer_1 = nn.Conv2d(channels * expansion, channels, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.mlp_layer_0(x)
        x = self.mlp_act(x)
        x = self.mlp_layer_1(x)
        return x
class LocalAgg(nn.Module):
    """ Local aggregation module: extracts local features with convolutions only.
    To keep the cost low it uses pointwise (1x1) convolutions plus a depthwise (3x3) convolution. """
    def __init__(self, channels):
        super(LocalAgg, self).__init__()
        self.bn = nn.BatchNorm2d(channels)
        # pointwise (1x1) convolution: a per-pixel fully connected layer that mixes channels
        self.pointwise_conv_0 = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        # depthwise (3x3) convolution: aggregates spatial context within a local window, one filter per channel
        self.depthwise_conv = nn.Conv2d(channels, channels, padding=1, kernel_size=3, groups=channels, bias=False)
        # normalization before the second pointwise convolution
        self.pointwise_prenorm_1 = nn.BatchNorm2d(channels)
        # pointwise (1x1) convolution: mixes channels again after the spatial aggregation
        self.pointwise_conv_1 = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.bn(x)
        x = self.pointwise_conv_0(x)
        x = self.depthwise_conv(x)
        x = self.pointwise_prenorm_1(x)
        x = self.pointwise_conv_1(x)
        return x
class GlobalSparseAttention(nn.Module):
    """ Global module: samples a sparse set of representative tokens and runs self-attention over them only. """
    def __init__(self, channels, r, heads):
        """
        Args:
            channels: number of channels
            r: sub-sampling rate (one representative token per r x r window)
            heads: number of attention heads (multi-head self-attention, MHSA)
        """
        super(GlobalSparseAttention, self).__init__()
        # per-head channel dimension
        self.head_dim = channels // heads
        # scaling factor for the dot-product attention
        self.scale = self.head_dim ** -0.5
        self.num_heads = heads
        # with kernel_size=1 and stride=r this keeps exactly one token per r x r window
        # (a sparse sampler rather than a true average pooling)
        self.sparse_sampler = nn.AvgPool2d(kernel_size=1, stride=r)
        # compute q, k, v with a single 1x1 convolution
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.sparse_sampler(x)
        B, C, H, W = x.shape
        q, k, v = self.qkv(x).view(B, self.num_heads, -1, H * W).split([self.head_dim, self.head_dim, self.head_dim],
                                                                       dim=2)
        # scaled dot-product attention map over the representative tokens
        attn = (q.transpose(-2, -1) @ k * self.scale).softmax(-1)
        # weight the values by the attention map to obtain the global-attention output
        x = (v @ attn.transpose(-2, -1)).view(B, -1, H, W)
        return x
class LocalPropagation(nn.Module):
    """ Local propagation module: spreads the global context of each representative token back to its r x r neighborhood. """
    def __init__(self, channels, r):
        super(LocalPropagation, self).__init__()
        # group normalization (a single group over all channels)
        self.norm = nn.GroupNorm(num_groups=1, num_channels=channels)
        # transposed convolution undoes the r-fold down-sampling of the GlobalSparseAttention module
        self.local_prop = nn.ConvTranspose2d(channels,
                                             channels,
                                             kernel_size=r,
                                             stride=r,
                                             groups=channels)
        # pointwise (1x1) convolution as the output projection
        self.proj = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.local_prop(x)
        x = self.norm(x)
        x = self.proj(x)
        return x
class LGL(nn.Module):
    """ Local-Global-Local (LGL) bottleneck block """
    def __init__(self, channels, r, heads):
        super(LGL, self).__init__()
        self.cpe1 = ConditionalPositionalEncoding(channels)
        self.LocalAgg = LocalAgg(channels)
        self.mlp1 = MLP(channels)
        self.cpe2 = ConditionalPositionalEncoding(channels)
        self.GlobalSparseAttention = GlobalSparseAttention(channels, r, heads)
        self.LocalPropagation = LocalPropagation(channels, r)
        self.mlp2 = MLP(channels)

    def forward(self, x):
        # 1. conditional positional encoding (residual)
        x = self.cpe1(x) + x
        # 2. local aggregation (residual)
        x = self.LocalAgg(x) + x
        # 3. feed-forward network (residual)
        x = self.mlp1(x) + x
        # 4. second conditional positional encoding (residual)
        x = self.cpe2(x) + x
        # 5. global sparse attention (height and width reduced by a factor of r),
        #    then local propagation back to full resolution (residual)
        x = self.LocalPropagation(self.GlobalSparseAttention(x)) + x
        # 6. second feed-forward network (residual)
        x = self.mlp2(x) + x
        return x
if __name__ == '__main__':
    # 64 channels, 32x32 feature map
    x = torch.randn(size=(1, 64, 32, 32))
    # 64 channels, sub-sampling rate r=2, 8 attention heads
    model = LGL(64, 2, 8)
    out = model(x)
    print(out.shape)
3. Code
import torch
import torch.nn as nn
# EdgeViT model configurations (channels, number of LGL blocks and attention heads per stage)
edgevit_configs = {
'XXS': {
'channels': (36, 72, 144, 288),
'blocks': (1, 1, 3, 2),
'heads': (1, 2, 4, 8)
}
,
'XS': {
'channels': (48, 96, 240, 384),
'blocks': (1, 1, 2, 2),
'heads': (1, 2, 4, 8)
}
,
'S': {
'channels': (48, 96, 240, 384),
'blocks': (1, 2, 3, 2),
'heads': (1, 2, 4, 8)
}
}
HYPERPARAMETERS = {
'r': (4, 2, 2, 1)
}
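# Note: HYPERPARAMETERS['r'] simply mirrors the default r=[4, 2, 2, 1] of EdgeViT below
# (it is never passed in explicitly). For a 224x224 input, the four stages produce
# 56x56, 28x28, 14x14 and 7x7 feature maps, so global sparse attention runs on
# 14x14, 14x14, 7x7 and 7x7 grids of representative tokens, respectively.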
class Residual(nn.Module):
    """ Wraps a module with a residual (skip) connection """
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, x):
        return x + self.module(x)
class ConditionalPositionalEncoding(nn.Module):
    """ Conditional positional encoding: a 3x3 depthwise convolution that injects position information """
    def __init__(self, channels):
        super(ConditionalPositionalEncoding, self).__init__()
        self.conditional_positional_encoding = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels,
                                                         bias=False)

    def forward(self, x):
        x = self.conditional_positional_encoding(x)
        return x
class MLP(nn.Module):
    """ Feed-forward network (FFN) built from 1x1 convolutions with an expansion ratio of 4 """
    def __init__(self, channels):
        super(MLP, self).__init__()
        expansion = 4
        self.mlp_layer_0 = nn.Conv2d(channels, channels * expansion, kernel_size=1, bias=False)
        self.mlp_act = nn.GELU()
        self.mlp_layer_1 = nn.Conv2d(channels * expansion, channels, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.mlp_layer_0(x)
        x = self.mlp_act(x)
        x = self.mlp_layer_1(x)
        return x
class LocalAgg(nn.Module):
    """ Local aggregation module: extracts local features with convolutions only.
    To keep the cost low it uses pointwise (1x1) convolutions plus a depthwise (3x3) convolution. """
    def __init__(self, channels):
        super(LocalAgg, self).__init__()
        self.bn = nn.BatchNorm2d(channels)
        # pointwise (1x1) convolution: a per-pixel fully connected layer that mixes channels
        self.pointwise_conv_0 = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        # depthwise (3x3) convolution: aggregates spatial context within a local window, one filter per channel
        self.depthwise_conv = nn.Conv2d(channels, channels, padding=1, kernel_size=3, groups=channels, bias=False)
        # normalization before the second pointwise convolution
        self.pointwise_prenorm_1 = nn.BatchNorm2d(channels)
        # pointwise (1x1) convolution: mixes channels again after the spatial aggregation
        self.pointwise_conv_1 = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.bn(x)
        x = self.pointwise_conv_0(x)
        x = self.depthwise_conv(x)
        x = self.pointwise_prenorm_1(x)
        x = self.pointwise_conv_1(x)
        return x
class GlobalSparseAttention(nn.Module):
    """ Global module: samples a sparse set of representative tokens and runs self-attention over them only. """
    def __init__(self, channels, r, heads):
        """
        Args:
            channels: number of channels
            r: sub-sampling rate (one representative token per r x r window)
            heads: number of attention heads (multi-head self-attention, MHSA)
        """
        super(GlobalSparseAttention, self).__init__()
        # per-head channel dimension
        self.head_dim = channels // heads
        # scaling factor for the dot-product attention
        self.scale = self.head_dim ** -0.5
        self.num_heads = heads
        # with kernel_size=1 and stride=r this keeps exactly one token per r x r window
        # (a sparse sampler rather than a true average pooling)
        self.sparse_sampler = nn.AvgPool2d(kernel_size=1, stride=r)
        # compute q, k, v with a single 1x1 convolution
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.sparse_sampler(x)
        B, C, H, W = x.shape
        q, k, v = self.qkv(x).view(B, self.num_heads, -1, H * W).split([self.head_dim, self.head_dim, self.head_dim],
                                                                       dim=2)
        # scaled dot-product attention map over the representative tokens
        attn = (q.transpose(-2, -1) @ k * self.scale).softmax(-1)
        # weight the values by the attention map to obtain the global-attention output
        x = (v @ attn.transpose(-2, -1)).view(B, -1, H, W)
        return x
class LocalPropagation(nn.Module):
    """ Local propagation module: spreads the global context of each representative token back to its r x r neighborhood. """
    def __init__(self, channels, r):
        super(LocalPropagation, self).__init__()
        # group normalization (a single group over all channels)
        self.norm = nn.GroupNorm(num_groups=1, num_channels=channels)
        # transposed convolution undoes the r-fold down-sampling of the GlobalSparseAttention module
        self.local_prop = nn.ConvTranspose2d(channels,
                                             channels,
                                             kernel_size=r,
                                             stride=r,
                                             groups=channels)
        # pointwise (1x1) convolution as the output projection
        self.proj = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.local_prop(x)
        x = self.norm(x)
        x = self.proj(x)
        return x
class LGL(nn.Module):
    """ Local-Global-Local (LGL) bottleneck block """
    def __init__(self, channels, r, heads):
        super(LGL, self).__init__()
        self.cpe1 = ConditionalPositionalEncoding(channels)
        self.LocalAgg = LocalAgg(channels)
        self.mlp1 = MLP(channels)
        self.cpe2 = ConditionalPositionalEncoding(channels)
        self.GlobalSparseAttention = GlobalSparseAttention(channels, r, heads)
        self.LocalPropagation = LocalPropagation(channels, r)
        self.mlp2 = MLP(channels)

    def forward(self, x):
        # 1. conditional positional encoding (residual)
        x = self.cpe1(x) + x
        # 2. local aggregation (residual)
        x = self.LocalAgg(x) + x
        # 3. feed-forward network (residual)
        x = self.mlp1(x) + x
        # 4. second conditional positional encoding (residual)
        x = self.cpe2(x) + x
        # 5. global sparse attention (height and width reduced by a factor of r),
        #    then local propagation back to full resolution (residual)
        x = self.LocalPropagation(self.GlobalSparseAttention(x)) + x
        # 6. second feed-forward network (residual)
        x = self.mlp2(x) + x
        return x
class DownSampleLayer(nn.Module):
    """ Down-sampling layer: a strided convolution (stride r) followed by group normalization """
    def __init__(self, dim_in, dim_out, r):
        super(DownSampleLayer, self).__init__()
        self.downsample = nn.Conv2d(dim_in,
                                    dim_out,
                                    kernel_size=r,
                                    stride=r)
        self.norm = nn.GroupNorm(num_groups=1, num_channels=dim_out)

    def forward(self, x):
        x = self.downsample(x)
        x = self.norm(x)
        return x
class EdgeViT(nn.Module):
    """ EdgeViT backbone: four stages, each a down-sampling layer followed by LGL blocks """
    def __init__(self, channels, blocks, heads, r=(4, 2, 2, 1), num_classes=1000, distillation=False):
        super(EdgeViT, self).__init__()
        self.distillation = distillation
        l = []
        in_channels = 3
        # main body: stack down-sampling layers and LGL blocks stage by stage
        for stage_id, (num_channels, num_blocks, num_heads, sample_ratio) in enumerate(zip(channels, blocks, heads, r)):
            # the first stage down-samples by 4 (patch embedding), later stages by 2
            l.append(DownSampleLayer(dim_in=in_channels, dim_out=num_channels, r=4 if stage_id == 0 else 2))
            for _ in range(num_blocks):
                l.append(LGL(channels=num_channels, r=sample_ratio, heads=num_heads))
            in_channels = num_channels
        self.main_body = nn.Sequential(*l)
        self.pooling = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(in_channels, num_classes, bias=True)
        # optional second classifier head (e.g. for knowledge distillation)
        if self.distillation:
            self.dist_classifier = nn.Linear(in_channels, num_classes, bias=True)

    def forward(self, x):
        x = self.main_body(x)
        x = self.pooling(x).flatten(1)
        if self.distillation:
            x = self.classifier(x), self.dist_classifier(x)
            # at inference time, average the two heads
            if not self.training:
                x = 1 / 2 * (x[0] + x[1])
        else:
            x = self.classifier(x)
        return x
def EdgeViT_XXS(pretrained=False):
    model = EdgeViT(**edgevit_configs['XXS'])
    if pretrained:
        raise NotImplementedError
    return model


def EdgeViT_XS(pretrained=False):
    model = EdgeViT(**edgevit_configs['XS'])
    if pretrained:
        raise NotImplementedError
    return model


def EdgeViT_S(pretrained=False):
    model = EdgeViT(**edgevit_configs['S'])
    if pretrained:
        raise NotImplementedError
    return model
if __name__ == '__main__':
    x = torch.randn(size=(1, 3, 224, 224))
    model = EdgeViT_S(False)
    # y = model(x)
    # print(y.shape)

    # profile FLOPs and parameters with the third-party `thop` package
    from thop import profile
    input = torch.randn(1, 3, 224, 224)
    flops, params = profile(model, inputs=(input,))
    print("flops:{:.3f}G".format(flops / 1e9))
    print("params:{:.3f}M".format(params / 1e6))
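The profiling above depends on the third-party thop package. If it is not installed, a rough sanity check of the model size can be obtained with plain PyTorch; this is a minimal sketch that only counts trainable parameters and assumes the EdgeViT definitions above are in scope:

```python
# Alternative to thop: count trainable parameters directly.
model = EdgeViT_S(False)
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("params:{:.3f}M".format(num_params / 1e6))
```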
Reference
https://zhuanlan.zhihu.com/p/516209737