Deep learning framework pytorch rapid development and actual combat chapter4
2022-08-02 14:18:00 【weixin_50862344】
Errors encountered

First

This error was probably caused by the download being interrupted earlier; I deleted the data folder and ran it again, and it worked.

Second

The same old problem again: change loss.data[0] to loss.item().
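A minimal illustration of that fix (assuming a recent PyTorch): a loss is a zero-dimensional tensor, and `.item()` extracts the plain Python number from it, whereas the old `loss.data[0]` indexing no longer works on 0-dim tensors.

```python
import torch

loss = torch.tensor(0.25)  # a 0-dim tensor, like the value a loss function returns
print(loss.item())         # extracts the Python float
```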
Feedforward neural network
import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt

# Hyper parameters
input_size = 784
hidden_size = 500
num_classes = 10
num_epochs = 5
batch_size = 100
learning_rate = 0.001

# MNIST dataset
train_dataset = dsets.MNIST(root='./data',
                            train=True,
                            transform=transforms.ToTensor(),
                            download=True)
test_dataset = dsets.MNIST(root='./data',
                           train=False,
                           transform=transforms.ToTensor())

# Data loader (input pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)
test_y = test_dataset.test_labels  # newer torchvision: test_dataset.targets

# Neural network model (1 hidden layer)
class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

net = Net(input_size, hidden_size, num_classes)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)

# Train the model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Flatten each 28x28 image into a 784-vector
        images = Variable(images.view(-1, 28*28))
        labels = Variable(labels)

        # Forward + backward + optimize
        optimizer.zero_grad()  # zero the gradient buffer
        outputs = net(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            print('Epoch [%d/%d], Step [%d/%d], Loss: %.4f'
                  % (epoch+1, num_epochs, i+1, len(train_dataset)//batch_size, loss.item()))

# Test the model
correct = 0
total = 0
for images, labels in test_loader:
    images = Variable(images.view(-1, 28*28))
    outputs = net(images)
    _, predicted = torch.max(outputs.data, 1)  # index of the max logit = predicted class
    total += labels.size(0)
    correct += (predicted == labels).sum()
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))

# Show a few training images with their labels
for i in range(1, 4):
    plt.imshow(train_dataset.train_data[i].numpy(), cmap='gray')  # newer torchvision: .data / .targets
    plt.title('%i' % train_dataset.train_labels[i])
    plt.show()

# Save the model (weights only)
torch.save(net.state_dict(), 'model.pkl')

test_output = net(images[:20])
pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
print('prediction number', pred_y)
print('real number', test_y[:20].numpy())
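The `torch.save(net.state_dict(), 'model.pkl')` call above stores only the weights, not the class definition, so to reuse the model you rebuild the same architecture and load the state dict back in. A self-contained sketch (using a single `nn.Linear` as a stand-in for the `Net` class above):

```python
import torch
import torch.nn as nn

net = nn.Linear(4, 2)
torch.save(net.state_dict(), 'model.pkl')    # saves an OrderedDict of tensors

net2 = nn.Linear(4, 2)                       # must recreate the same architecture
net2.load_state_dict(torch.load('model.pkl'))
print(torch.equal(net.weight, net2.weight))  # weights round-trip exactly
```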
- torch.max returns both the maximum values and their indices.
- To use torch.optim you must first construct an Optimizer object.
  (1) It must be given an iterable containing the parameters to optimize (Variable objects), e.g.:
  net.parameters()
  (2) Optimizer options can be specified globally, or set individually per parameter group.
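Both notes can be sketched in a few lines (the two-layer model here is just an illustrative stand-in):

```python
import torch
import torch.nn as nn

# torch.max returns the maximum values AND their indices along a dimension
scores = torch.tensor([[0.1, 0.7, 0.2],
                       [0.9, 0.05, 0.05]])
values, indices = torch.max(scores, dim=1)
print(indices.tolist())  # per-row argmax, i.e. the predicted class

# Per-parameter-group options: each group can override the global defaults
model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))
optimizer = torch.optim.Adam([
    {'params': model[0].parameters()},              # uses the default lr below
    {'params': model[1].parameters(), 'lr': 1e-4},  # overrides lr for this group
], lr=1e-3)
```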
- torchvision
  (1) torchvision.datasets contains datasets (p78)
  (2) torchvision.models contains pretrained model structures

  from torchvision import models
  # load a pretrained model
  resnet18 = models.resnet18(pretrained=True)
  # or the same architecture with random weights
  resnet18 = models.resnet18()

  (3) image transforms
Custom ConvNet
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Jan 1 22:03:51 2018
@author: pc
"""
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

class MNISTConvNet(nn.Module):
    def __init__(self):
        super(MNISTConvNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, 5)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(10, 20, 5)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, input):
        x = self.pool1(F.relu(self.conv1(input)))
        x = self.pool2(F.relu(self.conv2(x)))
        x = x.view(-1, 320)        # flatten: 20 channels x 4 x 4 = 320
        x = F.relu(self.fc1(x))
        return self.fc2(x)

net = MNISTConvNet()
print(net)

input = Variable(torch.randn(1, 1, 28, 28))
out = net(input)
print(out.size())  # torch.Size([1, 10])
(2) torch.nn

Layer structures:
(1) convolution
(2) pooling

Functions live in the torch.nn.functional package (p86)
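The split described above: stateful layers live in torch.nn as classes, while their stateless counterparts are plain functions in torch.nn.functional. For a parameter-free operation like ReLU the two forms are interchangeable:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.0, 2.0])
layer_out = nn.ReLU()(x)  # module form: an object you register inside a Module
func_out = F.relu(x)      # functional form: a stateless call inside forward()
print(torch.equal(layer_out, func_out))
```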