Cat and dog classification vgg16 finetune
2022-07-16 08:21:00 【booze-J】
The third method, called finetune, is similar to the second (bottleneck) method.
The bottleneck method freezes the parameters of the convolution and pooling layers, runs the images through them to compute features, feeds those features into our own fully connected layers, trains only those fully connected layers, and finally obtains the classification result.
Finetune also starts from the VGG16 weights trained on ImageNet, but during training it updates the convolution and pooling layers as well. Unlike the bottleneck method, it does not freeze the front of the network; instead the whole network (convolution layers, pooling layers, and fully connected layers) is fine-tuned. We set the learning rate low so that the parameters of the whole network change only slowly, which gives better results.
The fully connected layers still have to be trained first and then fine-tuned together with the VGG16 convolution and pooling layers. So we first use the bottleneck method to obtain trained fully connected layers, then combine them with the VGG16 convolution and pooling layers, and finally fine-tune the network as a whole.
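To see why a low learning rate keeps the pretrained weights nearly intact, here is a toy numeric illustration (my own example, not from the article's code) of a single plain SGD update at two different learning rates:

```python
# Toy illustration (not the article's code): why finetune uses a low
# learning rate so pretrained weights change only slowly.
def sgd_step(weight, grad, learning_rate):
    """One plain SGD update: w <- w - lr * grad."""
    return weight - learning_rate * grad

pretrained_weight = 0.8   # imagine a single weight learned on ImageNet
grad = 0.5                # gradient computed from the new cat/dog data

w_small = sgd_step(pretrained_weight, grad, learning_rate=1e-4)
w_large = sgd_step(pretrained_weight, grad, learning_rate=1e-1)
print(w_small)  # about 0.79995 - barely moved, pretrained knowledge kept
print(w_large)  # about 0.75    - a step big enough to disturb pretrained features
```

With `learning_rate=1e-4` the weight hardly moves per step, so the whole network drifts gently toward the new task instead of overwriting what it learned on ImageNet.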
Case code
The code runs on jupyter-notebook. The code blocks in this article are split the way they were written in jupyter-notebook; to run the code, just paste the blocks into jupyter-notebook.
1. Import third-party library
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
from tensorflow.keras.optimizers import SGD
import os
2. Load and assemble the model
(1) Load the VGG16 model (convolution and pooling layers)
# Load the pre-trained VGG16 model without the fully connected layers; input_shape must be specified
vgg16_model = VGG16(weights='imagenet', include_top=False, input_shape=(150,150,3))
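For a 150×150 input, the shape of the conv base's output can be worked out by hand. A small sketch (my assumption, not from the article: VGG16's conv base has five 2×2 max-pooling stages, each halving height and width with floor division, and its last conv block has 512 channels):

```python
# Sketch of VGG16's conv-base output shape (assumes five max-pool stages
# that each halve the spatial size with floor division, 512 channels out).
def vgg16_conv_output_shape(height, width, pool_stages=5, channels=512):
    for _ in range(pool_stages):
        height //= 2
        width //= 2
    return (height, width, channels)

print(vgg16_conv_output_shape(150, 150))  # (4, 4, 512)
```

So `vgg16_model.output_shape[1:]`, used below to size the Flatten layer, should come out as `(4, 4, 512)` for this input size.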
(2) Load the fully connected layers you trained earlier
# These fully connected layers are the same as the ones used in the
# [bottleneck method](https://blog.csdn.net/booze_/article/details/125702404?spm=1001.2014.3001.5502)
# Build the fully connected layers
top_model = Sequential()
top_model.add(Flatten(input_shape=vgg16_model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(2, activation='softmax'))
# Load the previously trained weights
top_model.load_weights('bottleneck_fc_model.h5')
The weight file 'bottleneck_fc_model.h5' comes from the bottleneck method.
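As a sanity check, the size of this top model can be estimated by hand. Assuming the conv base emits a (4, 4, 512) feature map for 150×150 inputs, so Flatten produces 4·4·512 = 8192 values, a back-of-the-envelope parameter count looks like this:

```python
# Back-of-the-envelope parameter count for the top model (assumes the conv
# base emits a (4, 4, 512) feature map, so Flatten yields 8192 values).
def dense_params(n_in, n_out):
    return n_in * n_out + n_out  # weight matrix plus biases

flat = 4 * 4 * 512                # 8192 inputs to the first Dense layer
hidden = dense_params(flat, 256)  # Dense(256, activation='relu')
output = dense_params(256, 2)     # Dense(2, activation='softmax')
print(hidden + output)  # 2097922
```

Roughly two million of these parameters sit in the first Dense layer, which is why training the top model alone (the bottleneck step) is already a substantial optimization.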
(3) Combine the network models
model = Sequential()
# Add the vgg16 convolution base first
model.add(vgg16_model)
# Then add the fully connected layers we trained ourselves
model.add(top_model)
3. Dataset processing
# Training set data generation
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
# Test set data processing
test_datagen = ImageDataGenerator(rescale=1./255)
batch_size = 32
# Generate training data
train_generator = train_datagen.flow_from_directory(
'../input/cat-and-dog-classify2/image/train', # Training data path
target_size=(150, 150), # Set picture size
batch_size=batch_size # Batch size
)
# Test data
test_generator = test_datagen.flow_from_directory(
'../input/cat-and-dog-classify2/image/test', # Test data path
target_size=(150, 150), # Set picture size
batch_size=batch_size # Batch size
)
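A quick check (toy values, my own example) of what `rescale=1./255` does: raw 8-bit pixel values in [0, 255] are scaled into roughly [0, 1] before they enter the network.

```python
# What rescale=1./255 does: each raw pixel value is multiplied by 1/255,
# mapping the 8-bit range [0, 255] into [0.0, 1.0].
def rescale(pixel, factor=1.0 / 255):
    return pixel * factor

print(rescale(0))    # 0.0
print(rescale(128))  # about 0.502
print(rescale(255))  # 1.0 (up to floating-point rounding)
```

Keeping inputs in a small, consistent range like this is standard practice; it matches how the top model's weights were trained in the bottleneck step, so the same rescaling must be applied here.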
4. Model training
model.compile(loss='categorical_crossentropy',
optimizer=SGD(learning_rate=1e-4, momentum=0.9),
metrics=['accuracy'])
# Count the number of training files
totalFileCount = sum([len(files) for root, dirs, files in os.walk('../input/cat-and-dog-classify2/image/train')])
model.fit_generator(
train_generator,
steps_per_epoch=totalFileCount/batch_size,
epochs=10,
validation_data=test_generator,
validation_steps=1000/batch_size,
)
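The `steps_per_epoch` value above is just `totalFileCount / batch_size`: one epoch should see every training image once, so the number of batches is that quotient rounded up. A small sketch with a hypothetical dataset size (2000 is an illustrative number, not the article's actual count):

```python
# Sketch of the steps_per_epoch arithmetic: one epoch should cover every
# training image once, so we need ceil(num_images / batch_size) batches.
import math

def steps_per_epoch(num_images, batch_size):
    return math.ceil(num_images / batch_size)

print(steps_per_epoch(2000, 32))  # 63 batches for a hypothetical 2000-image set
```

The last batch may be smaller than `batch_size`; rounding up ensures those remaining images are still used each epoch.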
Running results:
Compared with the bottleneck method, the network being trained here is more complex and far more weights are updated, so training takes longer. But the model reaches a relatively high accuracy right from the start, at over 90%.
Summary
The routine for classification problems:
Take weights that have already been trained, remove the model's fully connected layers, add your own fully connected layers, and then retrain for your own classification task.
When you meet an image-classification problem, first apply the bottleneck method: use the convolution and pooling layers of an existing, well-performing model to extract image features, and train only the fully connected layers. Then apply the finetune method: combine your fully connected layers with the convolution and pooling layers of the existing model and fine-tune the overall parameters for a better result.