PyTorch Deep Learning Practice, Lecture 10 / Assignment (Basic CNN)
2022-07-24 07:11:00 【m0_56247…】
Part One: 1. Model structure

import torch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F  # for the ReLU activation
import torch.optim as optim  # optimizers

batch_size = 64

# transform: preprocessing that turns each image into an image tensor.
# ToTensor converts a PIL Image or numpy.ndarray of shape (H x W x C) in the
# range [0, 255] into a tensor of shape (C x H x W) in the range [0.0, 1.0].
# Normalize standardizes the tensor with the given mean and standard deviation:
#   output[channel] = (input[channel] - mean[channel]) / std[channel]
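# Example: a raw pixel value of 255 becomes 1.0 after ToTensor, and then
# (1.0 - 0.1307) / 0.3081 ≈ 2.82 after Normalize with the MNIST mean and std.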
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
# The MNIST dataset consists of 1*28*28 single-channel images.
# ./ is the current directory, ../ the parent directory, / the root directory.
train_dataset = datasets.MNIST(root='../dataset/mnist',
                               train=True,
                               download=True,
                               transform=transform)  # training set
train_loader = DataLoader(train_dataset,
                          shuffle=True,
                          batch_size=batch_size)
test_dataset = datasets.MNIST(root='../dataset/mnist',
                              train=False,
                              download=True,
                              transform=transform)
test_loader = DataLoader(test_dataset,
                         shuffle=False,
                         batch_size=batch_size)
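# Each batch yielded by the loaders is an (images, labels) pair: images of
# shape (batch_size, 1, 28, 28) and labels of shape (batch_size,).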
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(320, 10)

    def forward(self, x):
        x = self.pooling(F.relu(self.conv1(x)))
        x = self.pooling(F.relu(self.conv2(x)))
        # Here x is a 4-D tensor (batch_size, C, H, W); x.size(0) is the batch size.
        # Flattening turns the multi-dimensional conv output into one vector per
        # sample, the usual transition from convolutional to fully connected layers.
        x = x.view(x.size(0), -1)  # -1 is computed automatically as C*H*W = 320
        x = self.fc(x)  # apply the fully connected layer
        return x
model = Net()
# Move the model to the GPU if one is available.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

criterion = torch.nn.CrossEntropyLoss()
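# Note: CrossEntropyLoss combines LogSoftmax and NLLLoss, which is why
# forward() returns raw logits without a final softmax.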
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
# The network is already fairly large, so gradient descent benefits from a
# better optimization algorithm, e.g. SGD with momentum, to speed up training.
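# SGD with momentum keeps a running velocity across steps (with dampening=0):
#   v = momentum * v + grad;  p = p - lr * v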
# Wrap one training epoch in a function.
def train(epoch):
    running_loss = 0.0
    # enumerate returns each batch together with its index batch_idx;
    # start=0, so the index starts from 0.
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, targets = data  # split the batch into images and labels
        inputs, targets = inputs.to(device), targets.to(device)  # move to the device
        optimizer.zero_grad()  # clear the gradients from the previous step
        # forward
        y_pred = model(inputs)  # run the training images through the network
        # backward
        loss = criterion(y_pred, targets)
        loss.backward()
        # update
        optimizer.step()  # update the parameters
        running_loss += loss.item()
        if batch_idx % 300 == 299:  # print once every 300 mini-batches
            print('[%d,%5d] loss:%.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0
def test():
    correct = 0  # number of correct predictions
    total = 0    # total number of samples
    with torch.no_grad():  # no gradients are needed during testing
        for data in test_loader:  # fetch batches from test_loader
            images, labels = data  # split the batch into images and labels
            images, labels = images.to(device), labels.to(device)  # move to the device
            outputs = model(images)  # run the batch through the network
            # torch.max along dim=1 returns two values: the maximum of each row
            # and its index. Each row is one sample with 10 class scores
            # (dim 0 is the sample dimension, dim 1 the class dimension).
            _, predicted = torch.max(outputs.data, dim=1)
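            # e.g. torch.max(torch.tensor([[0.1, 0.7, 0.2]]), dim=1)
            # returns (values=tensor([0.7000]), indices=tensor([1]))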
            total += labels.size(0)  # labels has shape (N,); accumulating N gives the total sample count
            # Compare predictions with labels: equal entries count as 1, others as 0;
            # summing and calling .item() gives the number of correct predictions.
            correct += (predicted == labels).sum().item()
    print("accuracy on test set:%d %% [%d/%d]" % (100 * correct / total, correct, total))
# Training
if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()  # one round of training, then one round of testing
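To see where the 320 input features of self.fc come from, here is a minimal shape trace (an illustrative sketch added here, assuming the Net class above; not part of the original assignment):

x = torch.zeros(1, 1, 28, 28)           # one dummy MNIST-sized image
net = Net()
x = net.pooling(F.relu(net.conv1(x)))   # (1, 10, 12, 12): (28-5+1)/2 = 12
x = net.pooling(F.relu(net.conv2(x)))   # (1, 20, 4, 4):  (12-5+1)/2 = 4
print(x.view(x.size(0), -1).shape)      # torch.Size([1, 320]) = 20*4*4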
2. Results

Part Two: 1. Model structure (deepened)

import torch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F  # for the ReLU activation
import torch.optim as optim  # optimizers

batch_size = 64

# Preprocessing, identical to Part One: ToTensor converts a PIL Image or
# numpy.ndarray of shape (H x W x C) in [0, 255] to a (C x H x W) tensor in
# [0.0, 1.0]; Normalize standardizes it with the given mean and std.
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
# The MNIST dataset consists of 1*28*28 single-channel images.
train_dataset = datasets.MNIST(root='../dataset/mnist',
                               train=True,
                               download=True,
                               transform=transform)  # training set
train_loader = DataLoader(train_dataset,
                          shuffle=True,
                          batch_size=batch_size)
test_dataset = datasets.MNIST(root='../dataset/mnist',
                              train=False,
                              download=True,
                              transform=transform)
test_loader = DataLoader(test_dataset,
                         shuffle=False,
                         batch_size=batch_size)
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=3)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=3, padding=1)
        self.conv3 = torch.nn.Conv2d(20, 32, kernel_size=3)
        self.pooling = torch.nn.MaxPool2d(2)
        self.relu = torch.nn.ReLU()
        self.fc1 = torch.nn.Linear(128, 64)
        self.fc2 = torch.nn.Linear(64, 32)
        self.fc3 = torch.nn.Linear(32, 10)

    def forward(self, x):
        x = self.pooling(self.relu(self.conv1(x)))
        x = self.pooling(self.relu(self.conv2(x)))
        x = self.pooling(self.relu(self.conv3(x)))
        # x is a 4-D tensor (batch_size, C, H, W); flatten it to one vector per
        # sample before the fully connected layers (C*H*W = 128 here).
        x = x.view(x.size(0), -1)
        x = self.fc1(x)  # three fully connected layers
        x = self.fc2(x)
        x = self.fc3(x)
        return x
model = Net()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
# As in Part One, SGD with momentum is used; one training epoch is wrapped in a function.
def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, targets = data  # images and labels
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        # forward
        y_pred = model(inputs)
        # backward
        loss = criterion(y_pred, targets)
        loss.backward()
        # update
        optimizer.step()
        running_loss += loss.item()
        if batch_idx % 300 == 299:  # print once every 300 mini-batches
            print('[%d,%5d] loss:%.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0
def test():
    correct = 0
    total = 0
    with torch.no_grad():  # no gradients are needed during testing
        for data in test_loader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)  # class index with the highest score per sample
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print("accuracy on test set:%d %% [%d/%d]" % (100 * correct / total, correct, total))
# Training
if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()  # one round of training, then one round of testing
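Again as an illustrative sketch (assuming the deeper Net class above; not part of the original assignment), tracing shapes through the three conv/pool stages shows where the 128 input features of self.fc1 come from:

x = torch.zeros(1, 1, 28, 28)            # one dummy MNIST-sized image
net = Net()
x = net.pooling(net.relu(net.conv1(x)))  # (1, 10, 13, 13): (28-3+1)//2 = 13
x = net.pooling(net.relu(net.conv2(x)))  # (1, 20, 6, 6):  padding=1 keeps 13x13, then 13//2 = 6
x = net.pooling(net.relu(net.conv3(x)))  # (1, 32, 2, 2):  (6-3+1)//2 = 2
print(x.view(x.size(0), -1).shape)       # torch.Size([1, 128]) = 32*2*2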
2. Results

Reference
《PyTorch Deep Learning Practice》 complete course, Bilibili