Some understanding of the rank of a matrix and the rank of an image
2022-07-24 15:51:00 · ChaoFeiLi
Reference link: CV papers (low-rank related) — blog of weixin_30790841, CSDN
Related papers
Paper 1: RASL: Robust Alignment by Sparse and Low-rank Decomposition for Linearly Correlated Images. This was the first low-rank paper I encountered. It uses a low-rank formulation to align images and, at the same time, effectively remove occlusions; judging from the experiments, both the alignment and the de-occlusion results are quite good. However, the algorithm can only process a batch of images, not a single image, which limits its application scenarios. Let me briefly sketch the idea behind this paper.
First of all, I believe everyone is familiar with the concept of the rank of a matrix. Suppose we are given 10 pictures of the same person. If the expression, lighting, and head pose in all 10 pictures are exactly the same, with no noise interference and no occlusion, then stacking the 10 pictures as columns of a matrix should, ideally, give a matrix of rank 1; we can regard the 10 pictures as perfectly aligned. If a few of the 10 pictures are misaligned, occluded, shifted in position, or affected by lighting, then the rank of the stacked matrix is certainly no longer 1. And if the 10 face pictures differ greatly from one another, the stacked matrix may even be full rank.

From this perspective, we can argue that low rank is a mathematical characterization of image alignment. In practice, aligned pictures can never be exactly identical, so the rank cannot actually be 1, but we can relax the condition: when the matrix formed by the samples has a relatively small rank, we can consider the samples to be reasonably well aligned. This is the main mathematical idea of the paper. It sounds very simple, doesn't it? But it is not so easy to realize, and it involves a lot of mathematics.
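This intuition is easy to check numerically. Below is a minimal sketch (NumPy assumed; the 100-pixel "images" are made up for illustration): stacking identical columns gives rank 1, and perturbing a few columns raises the rank.

```python
import numpy as np

np.random.seed(0)

# a hypothetical 100-pixel image, flattened into a column vector
img = np.random.rand(100)

# 10 perfectly "aligned" (identical) copies stacked as columns -> rank 1
aligned = np.column_stack([img] * 10)
print(np.linalg.matrix_rank(aligned))      # 1

# perturb two columns (misalignment / occlusion) -> the rank rises
corrupted = aligned.copy()
corrupted[:, 3] += 0.5 * np.random.rand(100)
corrupted[:, 7] += 0.5 * np.random.rand(100)
print(np.linalg.matrix_rank(corrupted))    # 3
```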
Reference link: The rank of a matrix, eigenvalues and eigenvectors, basic matrix concepts — blog of TranSad, CSDN
The rank of a matrix, eigenvalues and eigenvectors
1. The rank of a matrix
Take an m*n matrix. You can read off its row vector group (each row is a vector, m vectors in total) or its column vector group (each column is a vector, n vectors in total).
Let's analyze the row vector group:
If all m vectors are linearly independent of one another, then the rank of the matrix is m. In general: however many linearly independent vectors there are, that is the rank. What is "linear independence"? For example, the vector [1,2,1] and the vector [2,4,2] are obviously related by a factor of 2, so the two are linearly dependent; the vectors [1,2,1] and [1,5,2], on the other hand, are linearly independent.
To judge whether a1, a2, a3 are linearly independent, set k1·a1 + k2·a2 + k3·a3 = 0 and solve the equation. If the only solution is k1 = k2 = k3 = 0, then they are linearly independent.
There is a rule here: whether we compute with the row vector group or the column vector group of a matrix, the resulting rank is the same.
My understanding: the rank of a matrix is the number of linearly independent vectors in it.
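A quick numerical check of the examples above (NumPy assumed): the rows [1,2,1] and [2,4,2] are dependent, [1,5,2] is independent of them, and the row rank equals the column rank.

```python
import numpy as np

A = np.array([[1, 2, 1],
              [2, 4, 2],    # = 2 * the first row -> linearly dependent
              [1, 5, 2]])   # independent of the first row

print(np.linalg.matrix_rank(A))    # 2 (rank via rows)
print(np.linalg.matrix_rank(A.T))  # 2 (rank via columns -- the same)
```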
2. Eigenvalues and eigenvectors of matrices
First, be clear about this: multiplying a vector by a matrix can stretch and rotate the vector.
For example, take the vector (1,1), drawn like this:

(figure: the vector (1,1) plotted in the plane)

Now multiply this vector (1,1) by a matrix, here the diagonal matrix diag(2,1). The result is (2,1), drawn on the picture:

(figure: the vector stretched to (2,1))

Notice that here a mere diagonal matrix stretched the original vector along the abscissa. Thinking a bit further: if the upper-right entry of the matrix were nonzero, or the lower-left entry were nonzero (assume a positive value), the original vector would be rotated to the right or to the left. The conclusion: a matrix acting on a vector can rotate or stretch it.
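Both effects can be sketched with NumPy: the diagonal matrix stretches (1,1) to (2,1), while a matrix with nonzero off-diagonal entries rotates it.

```python
import numpy as np

v = np.array([1.0, 1.0])

S = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # diagonal: stretch the x-coordinate by 2
print(S @ v)                 # [2. 1.]

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # nonzero off-diagonal entries: a 90-degree rotation
print(R @ v)                 # [-1.  1.]
```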
Is there a kind of vector whose direction stays the same after a matrix acts on it? Yes! This is exactly how eigenvalues and eigenvectors arise:
For a matrix A, we look for a non-zero vector x such that the result of A acting on x (i.e., Ax) is parallel to x, with the magnitudes satisfying Ax = λx (λ a constant). Once such an x is found, x is an eigenvector and λ is the corresponding eigenvalue.
Put another way: apply (multiply) the matrix A to every vector in the world. Most vectors will be rotated and stretched by A; only A's eigenvectors stand apart and keep their original direction.
In fact, an eigenvector represents a direction and captures key information about the matrix, so when two objects sharing the same character interact, the direction naturally does not change. And the more aligned the "directions" are, the greater the effect: the eigenvalue λ in λx becomes larger. So the magnitude of λ also represents the importance of that eigenvector (relative to the matrix's other eigenvectors). I remember that when I first learned PCA for dimensionality reduction, the eigenvectors with larger eigenvalues were the more important ones, because such vectors better represent the image features. That is exactly this idea.
A matrix generally has multiple eigenvectors; we use an eigenspace to represent them collectively. That is, the eigenspace (for a given eigenvalue) contains all of its eigenvectors.
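A minimal sketch with NumPy, verifying Ax = λx for each eigenpair of a small symmetric matrix and sorting the eigenvalues by importance (as in PCA):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)   # columns of `vecs` are the eigenvectors

# each eigenpair satisfies A @ x = lambda * x
for lam, x in zip(vals, vecs.T):
    assert np.allclose(A @ x, lam * x)

# sort by |lambda|: larger eigenvalues mark the more important directions
order = np.argsort(-np.abs(vals))
print(vals[order])              # [3. 1.]
```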
Now let's look at the definitions of eigenvalues and eigenvectors, combined with the understanding of matrices as spatial transformations. A matrix acting on a vector is equivalent to transforming the original space into a new space; if we adopt the view that a matrix is a linear transformation (see "Understanding linear algebra visually (1): what is a linear transformation?"), this is a transformation of the original basis.
Understood from the spatial perspective, multiplying a vector by a scalar merely scales it (its length or sign changes, but it stays on the same line), while a matrix transforms the whole space. If a matrix only scales some vector without changing its direction, that vector is called an eigenvector, and the scaling ratio is called the eigenvalue. (Reference link: https://blog.csdn.net/qq_34099953/article/details/84291180)
Reference link: The rank of an image and the rank of a matrix — blog of "deep blind learning", CSDN
2. The rank of an image and the rank of a matrix
The rank of a matrix = the maximum number of linearly independent row (or column) vectors. For an image, the rank can indicate the richness of the information the image contains, its redundancy, and its noise.
The smaller the rank:
- the fewer the basis vectors
- the greater the data redundancy
- the less rich the image information
- the less the image noise
Low-rank matrices
Concept
When the rank of a matrix is low (r << m, n), it can be regarded as a low-rank matrix. A low-rank matrix means that many of its rows (or columns) are linearly dependent, i.e., the information redundancy is large.
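One common way to exploit this redundancy is a truncated SVD: keeping only the top singular values gives the best low-rank approximation, which discards much of the noise. A small sketch (NumPy assumed; the rank-2 "signal" and the noise level are made up for illustration):

```python
import numpy as np

np.random.seed(0)

# a rank-2 "signal" matrix plus small noise
signal = (np.outer(np.random.rand(50), np.random.rand(40))
          + np.outer(np.random.rand(50), np.random.rand(40)))
noisy = signal + 0.01 * np.random.randn(50, 40)

# truncated SVD: keep only the top r singular values
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
r = 2
denoised = U[:, :r] * s[:r] @ Vt[:r]

print(np.linalg.matrix_rank(denoised))  # 2
# the rank-2 approximation is closer to the clean signal than the noisy input is
print(np.linalg.norm(denoised - signal) < np.linalg.norm(noisy - signal))  # True
```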
Uses
- Using the redundant information of a low-rank matrix, missing data can be recovered. This problem is called "low-rank matrix reconstruction", i.e.: "assuming the recovered matrix is low rank, use the known matrix entries to recover the missing ones." It can be applied to image restoration, collaborative filtering, and other fields.
- In deep learning, convolution kernels have many parameters and often carry large redundancy, i.e., the kernel parameters are low rank. In that case the kernel can be given a low-rank decomposition: a k × k kernel is factored into a k × 1 kernel and a 1 × k kernel, which reduces the parameter count, speeds up computation, and helps prevent overfitting.
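A concrete instance: the horizontal Sobel kernel is rank 1, so it factors exactly into a column kernel times a row kernel, and in general the factorization replaces k² parameters per channel pair with 2k. A sketch (the channel sizes below are arbitrary examples):

```python
import numpy as np

# the 3x3 Sobel kernel is rank 1: the outer product of a column and a row
col = np.array([1.0, 2.0, 1.0])
row = np.array([1.0, 0.0, -1.0])
K = np.outer(col, row)
print(np.linalg.matrix_rank(K))   # 1

# parameter count for a k x k conv layer vs its (k x 1) + (1 x k) factorization
k, c_in, c_out = 3, 64, 64        # arbitrary example sizes
full = k * k * c_in * c_out       # 36864
factored = 2 * k * c_in * c_out   # 24576
print(full, factored)
```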
Sparse coding
Why sparse coding?
First, there is an assumption (it is also a fact, a piece of prior knowledge): the signals of nature are sparse in composition. After sparse coding, dimensionality can be reduced and computational efficiency improved.
Image and rank
Question: how do we decide that an image or an object has the low-rank property? If an image with a high-rank attribute has missing regions, how can it be recovered, and what methods exist to solve this?
Answer: in image processing, the rank of an image can be loosely understood as the richness of the information it contains. Because local patches of an image are similar and repetitive, an image often has the low-rank property. When the rank of an image is relatively high, it may be due to image noise; by imposing a low-rank constraint, a good denoising effect should be achievable.