Machine learning SVM - Experimental Report
2022-06-26 11:05:00 【Obviously easy to prove】
Machine learning experiment report
- 0. The experimental report PDF can be downloaded from this site
- I. Purpose and requirements of the experiment
- II. Experiment content and methods
- III. Experimental steps and process
- 3.1 Comparing the performance of existing classification algorithms on face recognition
- 3.2 Comparing the face recognition methods learned so far
- IV. Conclusions and reflections
0. The experimental report PDF can be downloaded from this site
Machine learning experiment 4: SVM
Downloading it costs points (the report platform checks for duplicate submissions, so free-riding is discouraged).
I suggest reading the blog instead: it contains the material of many experimental reports, past and future, with the key points marked in bold using 【…】.
I. Purpose and requirements of the experiment
Purpose
- Master the principles of SVM, become familiar with the SVM optimization problems (including the soft margin and the kernel trick), and master the formula derivations.
- Be able to apply SVM to practical problems, such as face recognition.
- Gain a general understanding of current directions for improving SVM and of optimization algorithms built on the classic SVM, form your own view of them, and on that basis try to propose your own innovations.
Requirements
- Present the classic, soft-margin, and kernel SVM formulations and derive their optimization procedures; implement the classic SVM algorithm for image recognition; give an example of support vectors in the two-dimensional plane.
- Use the PCA and LDA algorithms to extract image features of dimension 10, 20, 30, …, 160, then classify with SVM and compare the recognition rates.
- In a separate section, design an innovative SVM algorithm, describe it briefly in the report, and compare it with the classic SVM. (The details may be written up as a paper and submitted to the paper submission office.)
II. Experiment content and methods
2.1 Basic concepts of SVM


2.2 The classic SVM and its derivation
Solution idea:
Solve the basic (hard-margin) SVM optimization problem via its dual problem.
Solution steps:
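The steps can be summarized as follows (the standard textbook derivation, written out here rather than taken from the report's screenshots):

```latex
% Primal problem (hard margin):
\min_{w,b}\ \frac{1}{2}\lVert w\rVert^2
\quad \text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1,\quad i=1,\dots,m.

% Lagrangian with multipliers \alpha_i \ge 0:
L(w,b,\alpha) = \frac{1}{2}\lVert w\rVert^2
  - \sum_{i=1}^{m} \alpha_i \bigl( y_i (w^\top x_i + b) - 1 \bigr).

% Setting \partial L/\partial w = 0 and \partial L/\partial b = 0 gives
w = \sum_{i=1}^{m} \alpha_i y_i x_i, \qquad \sum_{i=1}^{m} \alpha_i y_i = 0,

% and substituting back yields the dual problem:
\max_{\alpha}\ \sum_{i=1}^{m} \alpha_i
  - \frac{1}{2} \sum_{i=1}^{m}\sum_{j=1}^{m} \alpha_i \alpha_j y_i y_j\, x_i^\top x_j
\quad \text{s.t.}\quad \alpha_i \ge 0,\quad \sum_{i=1}^{m} \alpha_i y_i = 0.
```

By the KKT conditions, only points with α_i > 0 contribute to w; these are exactly the support vectors.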

2.3 The kernel-trick SVM and its derivation
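For reference (the standard form, summarized here rather than taken from the report's screenshots): the kernel trick replaces every inner product x_i^T x_j in the dual with a kernel value k(x_i, x_j), which implicitly maps the data into a higher-dimensional feature space:

```latex
\max_{\alpha}\ \sum_{i=1}^{m} \alpha_i
  - \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m} \alpha_i \alpha_j y_i y_j\, k(x_i, x_j)
\quad \text{s.t.}\quad \alpha_i \ge 0,\quad \sum_{i=1}^{m} \alpha_i y_i = 0.

% A common choice is the Gaussian (RBF) kernel
k(x, z) = \exp\!\left( -\frac{\lVert x - z\rVert^2}{2\sigma^2} \right),

% with decision function
f(x) = \operatorname{sign}\!\left( \sum_{i=1}^{m} \alpha_i y_i\, k(x_i, x) + b \right).
```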



2.4 The soft-margin SVM and its derivation
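For reference, the soft-margin primal adds slack variables ξ_i and a penalty parameter C (standard form, summarized here):

```latex
\min_{w,b,\xi}\ \frac{1}{2}\lVert w\rVert^2 + C \sum_{i=1}^{m} \xi_i
\quad \text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\quad \xi_i \ge 0.

% Its dual differs from the hard-margin dual only in the box constraint:
0 \le \alpha_i \le C, \qquad \sum_{i=1}^{m} \alpha_i y_i = 0.
```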



2.5 Solving the above optimization problems with MATLAB functions
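The report solves the dual quadratic program with MATLAB's quadprog. As a rough illustration of the same idea in Python (my own sketch with hypothetical toy data, not the report's code), the soft-margin dual can be solved by projected gradient ascent; for simplicity the bias term b is dropped here, which removes the equality constraint Σ α_i y_i = 0 and leaves only the box constraints:

```python
import numpy as np

def svm_dual_pg(X, y, C=1.0, lr=0.01, iters=2000):
    """Projected gradient ascent on the soft-margin SVM dual:
        max  sum(alpha) - 0.5 * alpha' Q alpha,  Q_ij = y_i y_j x_i . x_j
        s.t. 0 <= alpha_i <= C.
    The bias b is dropped for simplicity, which removes the equality
    constraint sum_i alpha_i y_i = 0 (an assumption of this sketch)."""
    m = X.shape[0]
    Q = (X @ X.T) * np.outer(y, y)
    alpha = np.zeros(m)
    for _ in range(iters):
        grad = 1.0 - Q @ alpha                        # gradient of the dual objective
        alpha = np.clip(alpha + lr * grad, 0.0, C)    # ascent step + project onto box
    w = (alpha * y) @ X                               # recover w = sum_i alpha_i y_i x_i
    return alpha, w

# Hypothetical toy data (linearly separable, two classes)
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha, w = svm_dual_pg(X, y, C=10.0)
```

In practice a proper QP solver (quadprog, or an SMO-type method) is preferred; the sketch only shows the structure of the problem that quadprog is given.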

2.6 Example: classifying a two-class problem in the 2-D plane with the classic SVM
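A minimal 2-D illustration of this kind of example (my own Python sketch with hypothetical toy data, not the report's figure or MATLAB code): train a soft-margin linear SVM by stochastic subgradient descent on the hinge loss, then read off the approximate support vectors as the points lying on or inside the margin, i.e. with y_i w·x_i ≤ 1:

```python
import numpy as np

def hinge_sgd(X, y, lam=0.05, epochs=500, seed=0):
    """Pegasos-style SGD on the soft-margin linear SVM primal
    (hinge loss + L2 penalty); the bias term is omitted for brevity.
    A sketch, not the report's implementation."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(m):
            t += 1
            eta = 1.0 / (lam * t)          # standard 1/(lambda t) step size
            w *= (1.0 - eta * lam)         # shrink: gradient of the L2 term
            if y[i] * (w @ X[i]) < 1:      # margin violated: hinge term active
                w += eta * y[i] * X[i]
    return w

# Hypothetical toy data: two classes in the 2-D plane
X = np.array([[1.0, 1.0], [2.0, 2.5], [-1.0, -1.0], [-2.5, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = hinge_sgd(X, y)
margins = y * (X @ w)
support = np.flatnonzero(margins <= 1.0 + 1e-2)   # points on/inside the margin
```

In the exact solution the support vectors are precisely the points with y_i w·x_i = 1 (equivalently α_i > 0 in the dual); here the margin test is numerical, so the reported set is approximate.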

III. Experimental steps and process
3.1 Comparing the performance of existing classification algorithms on face recognition
3.1.1 Datasets and the training/test split
ORL56_46 face dataset
This dataset contains 40 subjects with 10 images each; every image is 56×46 pixels. In this experiment each class is split into 5 training images and 5 test images, and all 40 classes are used.
AR face dataset
This database contains more than 3,200 color frontal face images of 126 subjects, with 26 different images per subject. For each subject the images were recorded in two sessions two weeks apart, each session consisting of 13 images. All images were taken with the same camera under strictly controlled lighting and viewpoint conditions. Every image is 768×576 pixels, with each pixel represented by a 24-bit RGB value. In this experiment each class is split into 13 training images and 13 test images, and the first 16 classes are used.
FERET face dataset
This dataset contains 200 subjects with 7 grayscale images each, sorted by class, at 80×80 pixels. Image 1 is the standard unchanged image; images 2 and 5 show large pose changes; images 3 and 4 show small pose changes; image 7 shows an illumination change. In this experiment each class is split into 4 training images and 3 test images, and all 200 classes are used.
3.1.2 The SVM multiclass classifier
Face recognition must distinguish more than two people, but in theory SVM solves a two-class problem. For the multiclass problem of face recognition, a multiclass SVM method is therefore needed.
According to the reading materials, there are currently two main ways to construct a multiclass SVM:
(1) The direct method: modify the objective function directly, combining the parameters of all decision surfaces into a single optimization problem, and solve the multiclass problem "in one shot". Its computational complexity is high.
(2) Indirect methods: build the multiclass classifier out of several binary classifiers; two schemes are common.
The construction of multiclass SVMs is not elaborated further here.
Because face recognition involves many classes, I use the "one-vs-rest" scheme, which requires comparatively little computation:
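The one-vs-rest idea can be sketched as follows (my own Python illustration with hypothetical toy data; a simple perceptron stands in for the binary learner to keep the sketch short — in the report each binary problem is solved by an SVM):

```python
import numpy as np

def train_binary(X, y, epochs=100):
    """Tiny stand-in binary linear classifier (perceptron);
    the report uses a binary SVM here instead."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:                  # misclassified: update
                w += yi * xi
    return w

def one_vs_rest_train(X, labels):
    """Train one classifier per class: class c vs. all other classes."""
    return {c: train_binary(X, np.where(labels == c, 1.0, -1.0))
            for c in np.unique(labels)}

def one_vs_rest_predict(W, X):
    """Predict the class whose classifier gives the largest score."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    classes = sorted(W)
    scores = np.stack([Xb @ W[c] for c in classes], axis=1)
    return np.array(classes)[np.argmax(scores, axis=1)]

# Hypothetical toy data: three well-separated classes in the plane
X = np.array([[0.0, 5.0], [0.5, 4.5], [5.0, 0.0],
              [4.5, 0.5], [-5.0, -5.0], [-4.5, -4.5]])
labels = np.array([0, 0, 1, 1, 2, 2])
W = one_vs_rest_train(X, labels)
pred = one_vs_rest_predict(W, X)
```

One-vs-rest trains C classifiers for C classes, versus C(C−1)/2 for one-vs-one, which is why it is the cheaper choice when the number of classes is large, as in face recognition.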
3.1.3 Experimental steps
1. PCA/LDA algorithm + SVM classifier for face recognition

2. PCA/LDA algorithm + kernel SVM for face recognition

3. PCA/LDA algorithm + soft-margin SVM for face recognition

3.1.4 Experimental results
Note: the LDA algorithm can only reduce the dimension into the range [1, number of classes − 1], so the target dimension must not be too high. Moreover, PCA is applied first to reduce the data to an intermediate dimension (the dimension corresponding to a 90% reconstruction threshold); the final LDA dimension must therefore be lower than the PCA intermediate dimension.
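These two dimension rules can be sketched as follows (my own Python sketch; the function names are mine, not the report's): the PCA intermediate dimension is the smallest number of components whose cumulative explained variance reaches the 90% reconstruction threshold, while LDA's target dimension is capped at (number of classes − 1):

```python
import numpy as np

def pca_dim_for_threshold(X, threshold=0.90):
    """Smallest number of principal components whose cumulative
    explained variance reaches the reconstruction threshold
    (the 90% criterion used in the report)."""
    Xc = X - X.mean(axis=0)
    # Singular values give the component variances: var_k ~ s_k**2
    s = np.linalg.svd(Xc, compute_uv=False)
    ratios = (s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(np.cumsum(ratios), threshold) + 1)

def lda_max_dim(n_classes):
    """LDA can project to at most (number of classes - 1) dimensions."""
    return n_classes - 1

# Example: data that varies almost entirely along one axis,
# so one component already explains far more than 90% of the variance
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) * np.array([10.0, 1.0, 0.1, 0.1, 0.1])
d_mid = pca_dim_for_threshold(X)   # PCA intermediate dimension
```

So for ORL (40 classes), for instance, the LDA target dimension is at most 39, regardless of how high the PCA intermediate dimension is.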
1. Recognition rate and efficiency on the ORL dataset

2. Recognition rate and efficiency on the AR dataset (intermediate dimension: 33)

3. Recognition rate and efficiency on the FERET dataset (intermediate dimension: 75)

3.1.5 Analysis of results
Organizing the experimental results above, we summarize and record the best cases and compare them against applying SVM directly without any dimensionality reduction (SVM here means the classic SVM algorithm).
The table shows that dimensionality reduction is essential for SVM classifiers. For an SVM classifier, the running time depends on the input dimension n; different solvers have different time complexities, commonly between O(n) and O(n^3). If the raw extracted features were used directly as input, SVM training time would be enormous: in a test, feeding 400 images of 112×92 pixels into the SVM classifier took more than 2 hours for a single training iteration. Such brute-force training is unreasonable, so the images must first be reduced in dimension.
During the experiments we observed that the recognition rate climbs to a high value as the dimension increases, then declines as the dimension grows further, and finally levels off. The likely reason is that high-dimensional data contains some noise, which affects the training of the SVM classifier and lowers the recognition rate. This also shows that a higher input dimension does not make an SVM classifier more accurate; to train a well-performing SVM classifier, attention must also be paid to the choice of target dimension.
Comparing the recognition rates across the datasets: on higher-dimensional data, the soft-margin SVM has the advantage over the hard-margin SVM, with a higher recognition rate.
3.2 Comparing the face recognition methods learned so far
3.2.1 Experiment description
Compare the recognition rates of the five face recognition methods learned so far: PCA, LDA, LRC, PCA+SVM, and LDA+SVM. To avoid duplicating work, the table below reuses data from previous experimental reports.
PCA: the target dimension is the dimension corresponding to a 90% reconstruction threshold;
LDA: the intermediate dimension is the PCA dimension at the 90% reconstruction threshold, and the final dimension is the one with the highest recognition rate, found by enumeration;
LRC: linear regression applied to face recognition;
PCA+SVM: the recognition rate is the highest over the dimension range 10–160;
LDA+SVM: the recognition rate is the highest over the dimension range 10–(class−1).
3.2.2 Experimental results

3.2.3 Analysis of experimental results
As the table above shows, the face recognition method with the highest recognition rate is PCA+SVM. We usually expect LDA to outperform PCA for classification, but when combined with SVM, LDA's limited dimensionality (the target dimension can only lie in [1, class−1]) causes a large loss of information when the number of classes is small, which is clearly a significant negative for SVM.
IV. Conclusions and reflections
4.1 Problems encountered and solutions
When combining the LDA quotient model with SVM, PCA dimensionality reduction must be performed before LDA. But when the final LDA dimension is high, the PCA intermediate dimension also grows, so PCA does not reduce the dimension sufficiently; the scatter matrices in the quotient model Sb/Sw then tend to be singular, which in turn makes the quadprog function report errors.
The following problem occurred during my experiments:
The solution is to switch to the LDA subtraction model; the subtraction model involves no matrix inversion, so the error no longer occurs.
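The fix can be illustrated as follows (my own Python sketch, not the report's MATLAB code): the quotient model needs a generalized eigenproblem involving the inverse of the within-class scatter Sw, which fails when Sw is singular, whereas the subtraction criterion w^T (Sb − Sw) w leads to an ordinary symmetric eigenproblem with no inversion at all:

```python
import numpy as np

def lda_directions_subtraction(X, labels, k):
    """LDA via the subtraction criterion w^T (Sb - Sw) w: an ordinary
    symmetric eigenproblem, so no inverse of Sw is needed even when
    Sw is singular. A sketch of the idea, not the report's code."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                # within-class scatter
    vals, vecs = np.linalg.eigh(Sb - Sw)             # symmetric: always solvable
    return vecs[:, np.argsort(vals)[::-1][:k]]       # top-k eigenvectors

# Works even when Sw is singular (fewer samples than dimensions)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (3, 10)) + 5,
               rng.normal(0, 0.1, (3, 10)) - 5])
labels = np.array([0, 0, 0, 1, 1, 1])
W = lda_directions_subtraction(X, labels, 1)
```

Here the two classes (6 samples in 10 dimensions) make Sw singular, yet the subtraction model still produces a projection that separates them.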
4.2 Reflections
The code for this experiment runs very slowly, especially on the FERET dataset with its 200 classes; it is hard to run to completion and stalls easily. The content of this report is also substantial: not only are there many formula derivations, but the experimental code is particularly error-prone, especially when writing out the SVM optimization problem. Because SVM is a constrained optimization problem that generally has no closed-form solution, great care is needed during implementation.
SVM truly lives up to its reputation as a distinctive and effective classification algorithm in traditional machine learning, and the idea behind the algorithm is elegant. In the paper I reported on, the author points out that there are currently two ways of solving SVM: the one emphasized in class, solving via the dual problem, and another based on a geometric interpretation. The SVM field is still very broad and its application scenarios are extensive. Here, a salute to scientific research!