How to use torchvision.models._utils.IntermediateLayerGetter
2022-06-27 09:35:00 【jjw_zyfx】
Without further ado, here is an example:
import torch
import torchvision

m = torchvision.models.resnet18(pretrained=True)
print(m)
new_m = torchvision.models._utils.IntermediateLayerGetter(
    m, {'layer1': 'feat1', 'layer3': 'feat2'})
out = new_m(torch.rand(1, 3, 224, 224))
print([(k, v.shape) for k, v in out.items()])
# Output:
# [('feat1', torch.Size([1, 64, 56, 56])), ('feat2', torch.Size([1, 256, 14, 14]))]
# A quick explanation, since the full model printout below is long: the first argument to
# IntermediateLayerGetter is the model itself (resnet18 here). The second argument is a dict
# whose keys name submodules of resnet18 whose outputs you want, and whose values are the
# names you assign to those outputs. So 'layer1': 'feat1' returns the output of layer1 under
# the name 'feat1', and 'layer3': 'feat2' returns the output of layer3 under the name 'feat2'.
# The rebuilt model only runs from the start of resnet18 through layer3, the last requested
# layer (inclusive); layer4, avgpool and fc are dropped.
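
Because the returned OrderedDict uses the names you chose ('feat1', 'feat2'), the extracted feature maps are easy to feed into your own layers. The snippet below is only an illustrative sketch: the TwoLevelHead module and its channel sizes (64 for feat1, 256 for feat2, matching the shapes printed above) are made up for this example and are not part of torchvision.

import torch
import torch.nn as nn
import torchvision

# Hypothetical head that consumes the two extracted feature maps.
# Channel counts (64 and 256) match the feat1/feat2 shapes shown above.
class TwoLevelHead(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.proj1 = nn.Conv2d(64, 128, kernel_size=1)
        self.proj2 = nn.Conv2d(256, 128, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(128 * 2, num_classes)

    def forward(self, feats):
        f1 = self.pool(self.proj1(feats['feat1'])).flatten(1)
        f2 = self.pool(self.proj2(feats['feat2'])).flatten(1)
        return self.fc(torch.cat([f1, f2], dim=1))

backbone = torchvision.models._utils.IntermediateLayerGetter(
    torchvision.models.resnet18(pretrained=True),
    {'layer1': 'feat1', 'layer3': 'feat2'})
head = TwoLevelHead()
logits = head(backbone(torch.rand(1, 3, 224, 224)))
print(logits.shape)  # torch.Size([1, 10])

This is the same pattern torchvision's detection models follow: a truncated backbone produces a dict of named feature maps, and downstream modules look them up by name.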
The full output of print(m):
ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer3): Sequential(
(0): BasicBlock(
(conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=512, out_features=1000, bias=True)
)
[('feat1', torch.Size([1, 64, 56, 56])), ('feat2', torch.Size([1, 256, 14, 14]))]
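
To see why the output stops at layer3 (layer4, avgpool and fc never run), it helps to know roughly what IntermediateLayerGetter does: it copies the model's top-level children, in order, until every requested layer has been included, and its forward pass relabels the requested outputs. The class below is a simplified sketch of that idea for illustration only, not the actual torchvision source.

from collections import OrderedDict
import torch.nn as nn

class SimpleLayerGetter(nn.ModuleDict):
    """Simplified sketch of IntermediateLayerGetter's behaviour (illustration only)."""
    def __init__(self, model, return_layers):
        remaining = dict(return_layers)
        layers = OrderedDict()
        for name, module in model.named_children():
            layers[name] = module
            remaining.pop(name, None)
            if not remaining:  # stop after the last requested layer
                break
        super().__init__(layers)
        self.return_layers = dict(return_layers)

    def forward(self, x):
        out = OrderedDict()
        for name, module in self.items():
            x = module(x)
            if name in self.return_layers:
                out[self.return_layers[name]] = x  # relabel with the name you chose
        return out

Run against resnet18 with {'layer1': 'feat1', 'layer3': 'feat2'}, this sketch produces the same two shapes as above: conv1 through layer3 are copied into the new module, and everything after layer3 is simply never added.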