
A Review of Few-Shot Learning

2022-06-25 05:03:00 MondayCat111


The goal of few-shot learning: learn how to solve a problem from only a small number of samples.
This survey divides few-shot learning methods into three categories: methods based on model fine-tuning, methods based on data augmentation, and methods based on transfer learning.

Few-shot learning methods based on model fine-tuning

      A model is first pre-trained on large-scale data, and then its fully connected layer or top layers are fine-tuned on the small target dataset to obtain the fine-tuned model.
      Premise for use: the distribution of the target dataset is similar to that of the source dataset.
      Shortcoming: when the distributions of the target and source datasets are not similar, the model overfits the target dataset.
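A minimal PyTorch sketch of this idea, assuming an ImageNet-pretrained ResNet-18 as the source model and a hypothetical 5-class target task: the pre-trained backbone is frozen and only the new top (fully connected) layer is fine-tuned.

```python
# Minimal sketch: fine-tune only the top layer of a pre-trained model.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)       # pre-trained on large-scale data
for p in model.parameters():                   # freeze all pre-trained weights
    p.requires_grad = False

num_target_classes = 5                         # hypothetical few-shot target task
model.fc = nn.Linear(model.fc.in_features, num_target_classes)  # new top layer

# Only the new fully connected layer is updated during fine-tuning.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```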

  1. In 2018, Howard et al. proposed the Universal Language Model Fine-tuning (ULMFiT) method. Unlike other approaches, it fine-tunes a pre-trained language model rather than an ordinary deep neural network.
          The innovation of this model lies in how the learning rate is varied while fine-tuning the language model, mainly in two respects: 1) traditional methods use the same learning rate for every layer, whereas ULMFiT gives each layer of the language model its own learning rate. The lower layers capture general features that need little adjustment, so they use a small learning rate; the higher layers are more specific to the task and data, so they are trained with a larger learning rate (see the learning-rate sketch after this list). 2) The learning rate is also varied over the course of training, first increasing and then slowly decreasing (a slanted triangular schedule).
    Paper: Howard J, Ruder S. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146, 2018.
  2. Nakamura et al. proposed a fine-tuning method that mainly includes the following mechanisms: 1) a lower learning rate is used when re-training on the few-shot classes; 2) an adaptive gradient optimizer is used during the fine-tuning phase; 3) when the source dataset and the target dataset differ greatly, better results can be obtained by fine-tuning the entire network.
    Paper: Nakamura A, Harada T. Revisiting Fine-tuning for Few-shot Learning. 2019.
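A minimal sketch of the per-layer ("discriminative") learning rates discussed above, in PyTorch; the layer grouping and the decay factor between layers are illustrative rather than the exact settings of either paper.

```python
# Sketch: ULMFiT-style discriminative (per-layer) learning rates.
# `layers` is an ordered list of nn.Module objects, bottom (general) to top
# (task-specific); base_lr and the decay factor are illustrative values.
import torch

def discriminative_param_groups(layers, base_lr=1e-3, decay=2.6):
    groups = []
    n = len(layers)
    for depth, layer in enumerate(layers):
        lr = base_lr / (decay ** (n - 1 - depth))   # lower layers get smaller rates
        groups.append({"params": layer.parameters(), "lr": lr})
    return groups

# Example: optimizer = torch.optim.Adam(discriminative_param_groups([emb, rnn1, rnn2, head]))
# Using an adaptive optimizer such as Adam also matches the second method above.
```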

Few-shot learning based on data augmentation

      Data augmentation refers to using auxiliary data or auxiliary information to expand the data or enhance the features of the original few-shot dataset. Data expansion adds new data to the original dataset, which can be unlabeled data or synthesized labeled data; feature enhancement adds features that help classification to the feature space of the original samples, increasing the diversity of the features.

Methods based on unlabeled data

      Methods based on unlabeled data expand the few-shot dataset with unlabeled data; common approaches are semi-supervised learning [12][13] and transductive learning [15].

  1. Semi-supervised learning
    Wang et al. [106], working under the idea of semi-supervised learning and inspired by the transferability of CNNs, proposed an additional unsupervised meta-training stage that exposes multiple top-layer units to large amounts of unlabeled real-world data. By encouraging these units to learn diverse sets of low-density separators in the unlabeled data, the model captures a more general and richer description of the visual world and decouples these units from their association with a particular set of categories (i.e., they no longer represent only a specific dataset).

    Boney et al. [14] proposed in 2018 to use the MAML [45] model for semi-supervised learning: unlabeled data are used to adjust the parameters of the embedding function, and labeled data are used to adjust the parameters of the classifier.

    Ren et al. [35] improved on the prototypical network [34] in 2018 by adding unlabeled data, achieving higher accuracy.

  2. Transductive learning
    Transductive learning assumes that the unlabeled data are the test data; the goal is to achieve the best generalization on exactly these unlabeled data.

    Liu et al. [16] applied transductive learning and in 2019 proposed the Transductive Propagation Network to address the few-shot problem. The network consists of four stages: feature embedding, graph construction, label propagation, and loss computation (a simplified label-propagation sketch follows this list).

    Hou et al. [113] also proposed the Cross Attention Network, based on the idea of transductive learning. An attention mechanism generates a cross-attention map for each pair of class feature and query feature, highlighting the target regions and making the extracted features more discriminative. In addition, the paper presents a transductive inference algorithm to alleviate the problem of having too little data: the unlabeled query set is used iteratively to augment the support set, making the class features more representative.
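A minimal sketch of the label-propagation stage mentioned for the Transductive Propagation Network above, under simplifying assumptions (a dense Gaussian similarity graph and a closed-form propagation step rather than the paper's exact graph construction).

```python
# Sketch: label propagation over a similarity graph (simplified).
import torch

def label_propagation(feats, labels_onehot, num_labeled, alpha=0.99, sigma=1.0):
    """feats: (N, D) embeddings of support + query samples, support first;
    labels_onehot: (N, C) with zero rows for the unlabeled query samples."""
    dist2 = torch.cdist(feats, feats) ** 2
    W = torch.exp(-dist2 / (2 * sigma ** 2))            # pairwise similarity graph
    W.fill_diagonal_(0)
    d = W.sum(dim=1)
    S = W / torch.sqrt(d[:, None] * d[None, :])         # symmetric normalization
    n = feats.size(0)
    # closed-form propagation: F* = (I - alpha * S)^(-1) Y
    F_star = torch.linalg.solve(torch.eye(n) - alpha * S, labels_onehot)
    return F_star[num_labeled:].softmax(dim=1)           # label scores for the queries
```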

Methods based on data synthesis

      Methods based on data synthesis synthesize new labeled data for the few-shot classes in order to expand the training data; a commonly used algorithm is the Generative Adversarial Network (GAN) [89].
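For reference, a minimal GAN training-loop sketch (illustrative network sizes and hyper-parameters) for synthesizing feature vectors of a scarce class from random noise; it shows the generic adversarial setup rather than any specific cited method.

```python
# Sketch: generic GAN training step for synthesizing features of a scarce class.
import torch
import torch.nn as nn

latent_dim, feat_dim = 64, 512
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
D = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real_feats):
    batch = real_feats.size(0)
    # 1) discriminator: tell real features from generated ones
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real_feats), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) generator: produce features the discriminator accepts as real
    fake = G(torch.randn(batch, latent_dim))
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

After training, new samples for the scarce class can be drawn as `G(torch.randn(k, latent_dim))`.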

  1. Mehrotra et al. [17] applied GANs to few-shot learning, proposing the Generative Adversarial Residual Pairwise Network to address the one-shot learning problem.

  2. Hariharan et al. [92] proposed a new method consisting of two stages: a representation learning stage and a few-shot learning stage. The representation learning stage learns a general representation model on a source dataset containing a large amount of data; the few-shot learning stage fine-tunes the model on new categories with only a few samples. For this stage, the paper proposes generating new data to augment the few-shot classes. The authors assume that a transformation exists between two samples of the same category, so that, given a sample x of a new category, a generator G that has learned this transformation can produce new samples of that category.

  3. Wang et al. [105] combined meta-learning with data generation. They propose expanding sample diversity by generating virtual data with a data generation model and, together with meta-learning, training the generation model and the classification algorithm end to end. By changing certain attributes and features of existing images, such as lighting, pose, and location, and transferring these changes to new samples, new sample images with different variations are generated, achieving image expansion.
          Shortcomings of existing data generation methods: 1) they do not capture complex data distributions; 2) they cannot generalize to few-shot categories; 3) the generated features are not interpretable.

  4. To address the above problems, Xian et al. [94] combined a variational autoencoder (VAE) with a GAN into a new network, f-VAEGAN-D2. While performing few-shot image classification, this network can express the feature space of the generated samples in natural language, making it interpretable.

  5. Chen et al. [104] continued this line of work, proposing to use meta-learning to interpolate images from the training set into the support set, forming an expanded support set.

Methods based on feature enhancement

      The two kinds of methods above use auxiliary data to enhance the sample space. Sample diversity can also be improved by enhancing the feature space of the samples, since the key to few-shot learning is obtaining a feature extractor that generalizes well.
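As a generic illustration of feature-space enhancement (not any of the specific methods cited below), a sketch that synthesizes extra embeddings for a scarce class by interpolating and slightly perturbing its few support embeddings:

```python
# Generic illustration: enrich the feature space of a scarce class.
import torch

def augment_features(class_feats, n_new=20, noise_std=0.05):
    """class_feats: (K, D) embeddings of the K available samples of one class."""
    idx_a = torch.randint(len(class_feats), (n_new,))
    idx_b = torch.randint(len(class_feats), (n_new,))
    lam = torch.rand(n_new, 1)
    mixed = lam * class_feats[idx_a] + (1 - lam) * class_feats[idx_b]   # interpolation
    return mixed + noise_std * torch.randn_like(mixed)                  # small perturbation
```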

  1. Dixit et al. [18] proposed AGA (Attribute-Guided Augmentation).
  2. Schwartz et al. [19] proposed the Delta Encoder.
  3. Shen et al. [103] proposed replacing the fixed attention mechanism with an uncertain attention mechanism M.

Few-shot learning based on transfer learning

      Transfer learning means using old knowledge to learn new knowledge; its main goal is to quickly transfer knowledge that has already been learned to a new domain [21]. The stronger the correlation between the source domain and the target domain, the better the effect of transfer learning [22]. In transfer learning, the data are divided into three parts: the training set, the support set, and the query set. The training set is the source dataset and generally contains a large amount of labeled data; the support set consists of the training samples in the target domain and contains a small amount of labeled data; the query set consists of the test samples in the target domain.
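A minimal sketch of how an N-way K-shot episode (support set plus query set) is typically sampled from a dataset; the function and parameter names are illustrative.

```python
# Sketch: build an N-way K-shot episode with Q query samples per class.
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, q_query=15):
    """dataset: list of (sample, label) pairs from the target domain."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = random.sample(list(by_class), n_way)       # pick N classes
    support, query = [], []
    for new_label, c in enumerate(classes):
        picked = random.sample(by_class[c], k_shot + q_query)
        support += [(x, new_label) for x in picked[:k_shot]]   # K labeled samples
        query += [(x, new_label) for x in picked[k_shot:]]     # Q evaluation samples
    return support, query
```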

Metric-learning-based methods

  1. Koch et al. [30] were the first, in 2015, to propose using Siamese Neural Networks for one-shot image recognition. The Siamese neural network is a similarity-metric model that can be used for category recognition when there are many categories but few samples per category. It learns a metric from the data and then uses the learned metric to compare and match samples of unknown categories. The two twin networks share one set of parameters and weights. The main idea is to map the inputs into a target space with an embedding function and to compute similarity with a simple distance function. During training, the Siamese network minimizes the distance between a pair of samples of the same category and maximizes the distance between a pair of samples of different categories.

  2. Vinyals et al. [31] continued the in-depth study of one-shot learning and in 2016 proposed Matching Networks, which map a small labeled support set together with unlabeled samples to their corresponding labels.

  3. The Multi-attention Network [33] was proposed to further take the classification labels of the images into account. The model embeds the image labels into a vector space with GloVe embeddings and builds an attention mechanism between the label's semantic features and the image features, determining which part (single attention) or which parts (multi-attention) of the image the label mainly focuses on. The attention mechanism is used to update the image vector, and finally a distance function computes similarity to obtain the classification result.

  4. The methods above all target the one-shot learning problem. To further address the few-shot problem, Snell et al. [34] proposed Prototypical Networks in 2017. The authors assume that every category has a prototype in the vector space, also called the class center point. The prototypical network maps images into vectors with a deep neural network and, for samples belonging to the same category, takes the mean of their vectors as the prototype of that class. Training minimizes a loss function that pulls samples of the same category closer together and pushes samples of different categories farther apart, thereby updating the parameters of the embedding function. The idea of the prototypical network is shown in Figure 6: for an input sample x, the Euclidean distance between the vector of x and the prototype of each category is compared.
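A minimal sketch of this prototype-based classification, assuming the embeddings have already been computed: class prototypes are the means of the support embeddings, and queries are scored by negative Euclidean distance.

```python
# Sketch: prototype-based classification over precomputed embeddings.
import torch

def prototypical_logits(support_emb, support_labels, query_emb, num_classes):
    """support_emb: (S, D); support_labels: (S,) in [0, num_classes); query_emb: (Q, D)."""
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0)     # class center point
        for c in range(num_classes)
    ])                                                    # (C, D)
    dists = torch.cdist(query_emb, prototypes)            # Euclidean distance to each prototype
    return -dists                                         # closer prototype => larger logit
```

Training minimizes cross-entropy on these logits, which draws samples toward their own prototype and away from the others.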

Meta-learning-based methods

  1. In 2017, Munkhdalai et al. [44] continued to use the meta-learning framework to address the one-shot classification problem and proposed a new model, the Meta Network (Meta Networks). The meta network consists mainly of two parts, a base-learner and a meta-learner, plus an additional memory block that helps the model learn quickly.


  2. Finn et al. [45] proposed Model-Agnostic Meta-Learning (MAML) in 2017. MAML seeks the parameters of the neural network that are most sensitive to each task, so that fine-tuning these parameters makes the model's loss function converge quickly.
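A minimal MAML-style sketch on a simple linear classifier (one inner gradient step, illustrative hyper-parameters), showing how the meta-update differentiates through the task-specific adaptation:

```python
# Sketch: MAML with a single inner gradient step on a linear classifier.
import torch
import torch.nn.functional as F

w = torch.randn(5, 64, requires_grad=True)     # meta-learned initialization (5 classes, 64-d features)
b = torch.zeros(5, requires_grad=True)
meta_opt = torch.optim.Adam([w, b], lr=1e-3)

def maml_outer_step(tasks, inner_lr=0.01):
    """tasks: iterable of (support_x, support_y, query_x, query_y) tensors."""
    meta_loss = 0.0
    for sx, sy, qx, qy in tasks:
        # inner loop: adapt the initialization with one step on the support set
        inner_loss = F.cross_entropy(F.linear(sx, w, b), sy)
        gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
        w_adapt, b_adapt = w - inner_lr * gw, b - inner_lr * gb
        # outer loop: evaluate the adapted parameters on the query set
        meta_loss = meta_loss + F.cross_entropy(F.linear(qx, w_adapt, b_adapt), qy)
    meta_opt.zero_grad()
    meta_loss.backward()                        # gradients flow back through the inner step
    meta_opt.step()
    return meta_loss.item()
```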

Methods based on graph neural networks

     Garcia et al. [50] used graph convolutional neural networks in 2018 to perform few-shot image classification. In a graph neural network, each sample is viewed as a node of the graph, and the model learns an embedding vector not only for each node but also for each edge. A convolutional neural network first embeds all samples into a vector space; each sample vector is concatenated with its label and fed into the graph neural network; edges are built between the nodes, the node vectors are updated by graph convolution, and the edge vectors are in turn updated from the node vectors, which yields a deep graph neural network. As shown in Figure 5, five different nodes are input to the GNN, edges are built according to formula A, the node vectors are updated by graph convolution, the edges are updated according to A again, the final node vectors are obtained with one more graph-convolution layer, and finally the class probabilities are computed.
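A minimal sketch of the alternating edge/node updates described above (illustrative layer sizes, not the exact architecture of the cited paper):

```python
# Sketch: one edge-update / node-update layer of a few-shot GNN.
import torch
import torch.nn as nn

class GNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
        self.node_fc = nn.Linear(dim, dim)

    def forward(self, nodes):                    # nodes: (N, dim), one node per sample
        # edge update: adjacency from pairwise distances between node vectors
        dist = torch.cdist(nodes, nodes).unsqueeze(-1)                 # (N, N, 1)
        adj = torch.softmax(self.edge_mlp(dist).squeeze(-1), dim=-1)   # (N, N)
        # node update: graph convolution aggregates neighbors through the edges
        return torch.relu(self.node_fc(adj @ nodes)) + nodes

# Initial node vectors concatenate each sample embedding with its (one-hot) label;
# stacking several such layers gives the deep graph neural network described above.
```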

Outlook

1) At the data level, try training models with other prior knowledge, or make better use of unlabeled data. To bring few-shot learning closer to reality, it is worth exploring whether good results can be achieved without relying on model pre-training by using prior knowledge (for example, knowledge graphs). Although labeled samples are scarce in many fields, the vast amount of unlabeled data in the real world contains a lot of information, so training models with unlabeled data is a direction worthy of further study.
2) Few-shot learning based on transfer learning faces the challenges of transferring features, parameters, and gradients. To better understand which features and parameters are suitable for transfer, the interpretability of deep learning needs to improve; to make models converge quickly on new domains and new tasks, reasonable gradient-transfer algorithms need to be designed.
3) For few-shot learning based on metric learning, propose more effective neural-network metrics. The application of metric learning to few-shot learning is already relatively mature, but static metrics based on fixed distance functions leave little room for improvement; using neural networks to compute sample similarity is likely to become the mainstream metric approach, so neural metric algorithms with better performance need to be designed.
4) For few-shot learning based on meta-learning, design better meta-learners. Meta-learning is a new approach in the field of few-shot learning and the current models are not yet mature; how to design meta-learners that learn more, or more effective, meta-knowledge will be an important research direction in the future.
5) For few-shot learning based on graph neural networks, explore more effective ways to apply them. Graph neural networks have been a hot topic in recent years, have been applied across many domains, and offer good interpretability and performance, yet few models have been applied to few-shot learning; the network structure, node update function, and edge update function deserve further exploration.
6) Try fusing different few-shot learning approaches. Existing few-shot models use either data augmentation or transfer learning; in the future the two can be combined, improving both the data and the model to obtain better results. Meanwhile, with the rise of active learning [85] and reinforcement learning [86] frameworks in recent years, applying these frameworks to few-shot learning is also worth considering.

References:
[1] Li XY, Long SP, Zhu J. Survey of few-shot learning based on deep neural network . Application Research of Computers: 1-8[2019-08-26].https://doi.org/10.19734/j.issn.1001-3695.2019.03.0036.
[2] Jankowski N, Duch W, Gra̧bczewski K. Meta-learning in computational intelligence [M]// Springer Science & Business Media. 2011: 97-115.
[3] Lake B, Salakhutdinov R. One-shot learning by inverting a compositional causal process [C]// Proc of International Conference on Neural Information Processing Systems. [S.l.]: Curran Associates Inc, 2013: 2526-2534.
[4] Li Fe-Fei et al. A bayesian approach to unsupervised one-shot learning of object categories. In Computer Vision, 2003. Proceedings. Ninth IEEE International Conference
[5] Feifei L, Fergus R, Perona P. One-shot learning of object categories . IEEE Trans Pattern Anal Mach Intell, 2006, 28(4):594-611.
[6] Fu Y, Xiang T, Jiang YG, et al. Recent Advances in Zero-Shot Recognition: Toward Data-Efficient Understanding of Visual Content . IEEE Signal Processing Magazine, 2018, 35(1):112-125.
[7] Wang YX, Girshick R, Hebert M, et al. Low-Shot Learning from Imaginary Data . 2018.
[8] Yang J, Liu YL. The Latest Advances in Face Recognition with Single Training Sample . Journal of Xihua University (Natural Science Edition), 2014, 33(04):1-5+10.
[9] Manning C. Foundations of Statistical Natural Language Processing [M]. 1999.
[10] Howard J, Ruder S. Universal language model fine-tuning for text classification . arXiv preprint arXiv:1801.06146, 2018.
[11] Tu EM, Yang J. A Review of Semi-Supervised Learning Theories and Recent Advances . Journal of Shanghai Jiaotong University, 2018, 52(10):1280-1291.
[12] Liu JW, Liu Y, Luo XL. Semi-supervised Learning Methods . Chinese Journal of Computers, 2015, 38(08):1592-1617.
[13] Chen WJ. Semi-supervised Learning Study Summary . Academic Exchange, 2011, 7(16):3887-3889.
[14] Boney R, Ilin A. Semi-Supervised Few-Shot Learning with MAML. 2018.
[15] Su FL, Xie QH, Huang QQ, et al. Semi-supervised method for attribute extraction based on transductive learning . Journal of Shandong University (Science Edition), 2016, 51(03):111-115.
[16] Liu Y, Lee J, Park M, et al. Learning To Propagate Labels: Transductive Propagation Network For Few-Shot Learning . 2018.
[17] Mehrotra A, Dukkipati A. Generative Adversarial Residual Pairwise Networks for One Shot Learning . 2017.
[18] Dixit M, Kwitt R, Niethammer M, et al. AGA: Attribute Guided Augmentation . 2016.
[19] Schwartz E, Karlinsky L, Shtok J, et al. Delta-encoder: an effective sample synthesis method for few-shot object recognition . 2018.
[20] Chen Z, Fu Y, Zhang Y, et al. Semantic feature augmentation in few-shot learning . arXiv preprint arXiv:1804.05298, 2018, 86: 89.
[21] Liu XP, Luan XD, Xie YX, et al. Transfer Learning Research and Algorithm Review . Journal of Changsha University, 2018, 32(05):33-36+41.
[22] Wang H. Research review on transfer learning . Academic Exchange, 2017(32):209-211.
[23] Wang YX, Hebert M. Learning to Learn: Model Regression Networks for Easy Small Sample Learning[C]// European Conference on Computer Vision. Springer International Publishing, 2016.
[24] Shen YY, Yan Y, Wang HZ. Recent advances on supervised distance metric learning algorithms. Acta Automatica Sinica, 2014, 40(12): 2673-2686
[25] Aurélien B, Amaury H, and Marc S. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013.
[26] Kulis B. Metric Learning: A Survey . Foundations & Trends in Machine Learning, 2013, 5(4):287-364.
[27] Weinberger KQ. Distance Metric Learning for Large Margin nearest Neighbor Classification . JMLR, 2009, 10.
[28] Liu J, Yuan Q, Wu G, Yu X. Review of convolutional neural networks . Computer Era, 2018(11):19-23.
[29] Yang L, Wu YQ, Wang JL, Liu YL. Research on recurrent neural network . Computer Application, 2018, 38(S2):1-6+26.
[30] Koch G, Zemel R, Salakhutdinov R. Siamese neural networks for one-shot image recognition[C]//ICML deep learning workshop. 2015, 2.
[31] Vinyals O, Blundell C, Lillicrap T, et al. Matching Networks for One Shot Learning . 2016.
[32] Jiang LB, Zhou XL, Jiang FW, Che L. One-shot learning based on improved matching network . Systems Engineering and Electronics, 2019, 41(06):1210-1217.
[33] Wang P, Liu L, Shen C, et al. Multi-attention Network for One Shot Learning[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017.
[34] Snell J, Swersky K, Zemel RS. Prototypical Networks for Few-shot Learning . 2017.
[35] Ren M, Triantafillou E, Ravi S, et al. Meta-Learning for Semi-Supervised Few-Shot Classification. 2018.
[36] Sung F, Yang Y, Zhang L, et al. Learning to Compare: Relation Network for Few-Shot Learning. 2017.
[37] Zhang X, Sung F, Qiang Y, et al. Deep Comparison: Relation Columns for Few-Shot Learning. 2018.
[38] Hilliard N, Phillips L, Howland S, et al. Few-Shot Learning with Metric-Agnostic Conditional Embeddings . 2018.
[39] Thrun S, Pratt L. Learning to learn: introduction and overview [M]// Learning to Learn. 1998.
[40] Vilalta R, Drissi Y. A Perspective View and Survey of Meta-Learning . Artificial Intelligence Review, 2002, 18(2):77-95.
[41] Hochreiter S, Younger AS, Conwell PR. Learning To Learn Using Gradient Descent[C]// Proceedings of the International Conference on Artificial Neural Networks. Springer, Berlin, Heidelberg, 2001.
[42] Santoro A, Bartunov S, Botvinick M, et al. One-shot Learning with Memory-Augmented Neural Networks . 2016.
[43] Graves A, Wayne G, Danihelka I. Neural Turing Machines . Computer Science, 2014.
[44] Munkhdalai T, Yu H. Meta Networks. 2017.
[45] Finn C, Abbeel P, Levine S. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. 2017.
[46] Xiang J, Havaei M, Chartrand G, et al. On the Importance of Attention in Meta-Learning for Few-Shot Text Classification . 2018.
[47] Ravi S, Larochelle H. Optimization as a model for few-shot learning. 2016.
[48] Yu M, Guo X, Yi J, et al. Diverse Few-Shot Text Classification with Multiple Metrics. 2018.
[49] Zhou J, Cui G, Zhang Z, et al. Graph Neural Networks: A Review of Methods and Applications. 2018.
[50] Garcia V, Bruna J. Few-Shot Learning with Graph Neural Networks. 2017.
[51] Fort S. Gaussian Prototypical Networks for Few-Shot Learning on Omniglot. 2018.
[52] Malalur P, Jaakkola T. Alignment Based Matching Networks for One-Shot Classification and Open-Set Recognition. 2019.
[53] Yin C, Feng Z, Lin Y, et al. Fine-Grained Categorization and Dataset Bootstrapping Using Deep Metric Learning with Humans in the Loop[C]// Computer Vision & Pattern Recognition. 2016.
[54] Miller FP, Vandome AF, McBrewster J. Amazon Mechanical Turk. Alphascript Publishing, 2011.
[55] Deng J, Dong W, Socher R, et al. ImageNet: A large-scale hierarchical image database[C]// IEEE Conference on Computer Vision & Pattern Recognition. 2009.
[56] Geng R, Li B, Li Y, et al. Few-Shot Text Classification with Induction Network. 2019.
[57] Han X, Zhu H, Yu P, et al. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. 2018.
[58] Long M, Zhu H, Wang J, et al. Unsupervised Domain Adaptation with Residual Transfer Networks. 2016.
[59] Wang K, Liu BS. Research review on text classification. Data Communication, 2019(03):37-47.
[60] Huang AW, Xie K, Wen C, et al. Small sample face recognition algorithm based on transfer learning model. Journal of Changjiang University (Natural Science Edition), 2019, 16(07):88-94.
[61] Lv YQ, Min WQ, Duan H, Jiang SQ. Few-shot Food Recognition Fusint Triplet Convolutional Neural Network with Relation Network. Computer Science, 2020(01):1-8[2019-08-24]. http://kns.cnki.net/kcms/detail/50.1075.TP.20190614.0950.002.html.
[62] Upadhyay S, Faruqui M, Tur G, et al. (Almost) Zero-Shot Cross-Lingual Spoken Language Understanding[C]// ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
[63] Lampinen AK, Mcclelland JL. One-shot and few-shot learning of word embeddings. 2017.
[64] Zhu WN, Ma Y. Comparison of small sample of local complications between femoral artery sheath removal and vascular closure device. Journal of Modern Integrated Chinese and Western Medicine, 2010, 19(14):1748+1820.
[65] Liu JZ. Small Sample Bark Image Recognition Method Based on Convolutional Neural Network. Journal of Northwest Forestry University, 2019, 34(04):230-235.
[66] Lin KZ, Bai JX, Li HT, Li W. Facial Expression Recognition with Small Samples Fused with Different Models under Deep Learning. Computer Science and Exploration:1-13[2019-08-24]. http://kns.cnki.net/kcms/detail/11.5602.tp.20190710.1507.004.html.
[67] Liu JM, Meng YL, Wan XY. Cross-task dialog system based on small sample machine learning. Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), 2019, 31(03):299-304.
[68] Zhang J. Research and implementation of ear recognition based on few-shot learning [D]. Beijing University of Posts and Telecommunications, 2019.
[69] Yan B, Zhou P, Yan L. Disease Identification of Small Sample Crop Based on Transfer Learning. Modern Agricultural Sciences and Technology, 2019(06):87-89.
[70] Zhou TY, Zhao L. Research of handwritten Chinese character recognition model with small dataset based on neural network. Journal of Shandong University of Technology (Natural Science Edition), 2019, 33(03):69-74.
[71] Zhao Y. Convolutional neural network based carotid plaque recognition over small sample size ultrasound images [D]. Huazhong University of Science and Technology, 2018.
[72] Cheng L, Yuan Q, Wang Y, et al. A small sample exploratory study of autogenous bronchial basal cells for the treatment of chronic obstructive pulmonary disease. Chongqing Medical:1-5[2019-08-27]. http://kns.cnki.net/kcms/detail/50.1097.R.20190815.1557.002.html.
[73] Li CK, Fang J, Wu N, et al. A road extraction method for high resolution remote sensing image with limited samples. Science of Surveying and Mapping:1-10[2019-08-27]. http://kns.cnki.net/kcms/detail/11.4415.P.20190719.0927.002.html.
[74] Chen L, Zhang F, Jiang S. Deep Forest Learning for Military Object Recognition under Small Training Set Condition. Journal of Chinese Academy of Electronics, 2019, 14(03):232-237.
[75] Jia LZ, Qin RR, Chi RX, Wang JH. Evaluation research of nurse's core competence based on a small sample. Medical Higher Vocational Education and Modern Nursing, 2018, 1(06):340-342.
[76] Wang X, Ma TM, Yang T, et al. Moisture quantitative analysis with small sample set of maize grain in filling stage based on near infrared spectroscopy. Journal of Agricultural Engineering, 2018, 34(13):203-210.
[77] He XJ, Ma S, Wu YY, Jiang GR. E-Commerce Product Sales Forecast with Multi-Dimensional Index Integration Under Small Sample. Computer Engineering and Applications, 2019, 55(15):177-184.
[78] Liu XP, Guo B, Cui DJ, et al. Q-Precentile Life Prediction Based on Bivariate Wiener Process for Gear Pump with Small Sample Size. China Mechanical Engineering:1-9[2019-08-27]. http://kns.cnki.net/kcms/detail/42.1294.TH.20190722.1651.002.html.
[79] Quan ZN, Lin JJ. Text-Independent Writer Identification Method Based on Chinese Handwriting of Small Samples. Journal of East China University of Science and Technology (Natural Science Edition), 2018, 44(06):882-886.
[80] Sun CW, Wen C, Xie K, He JB. Voiceprint recognition method of small sample based on deep migration model. Computer Engineering and Design, 2018, 39(12):3816-3822.
[81] Sun YY, Jiang ZH, Dong W, et al. Image recognition of tea plant disease based on convolutional neural network and small samples. Jiangsu Journal of Agricultural Sciences, 2019, 35(01):48-55.
[82] Hu ZP, He W, Wang M, et al. Deep subspace joined sparse representation for single sample face recognition. Journal of Yanshan University, 2018, 42(05):409-415.
[83] Sun HW, Xie XF, Sun T, Zhang LJ. Threat assessment method of warships formation air defense based on DBN under the condition of small sample data missing. Systems Engineering and Electronics, 2019, 41(06):1300-1308.
[84] Liu YF, Zhou Y, Liu X, et al. Wasserstein GAN-Based Small-Sample Augmentation for New-Generation Artificial Intelligence: A Case Study of Cancer-Staging Data in Biology. Engineering, 2019, 5(01):338-354.
[85] Cohn DA, Ghahramani Z, Jordan MI. Active Learning with Statistical Models. Journal of Artificial Intelligence Research, 1996, 4(1):705-712.
[86] Kaelbling LP, Littman ML, Moore AP. Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research, 1996, 4:237-285.
[87] Bailey K, Chopra S. Few-Shot Text Classification with Pre-Trained Word Embeddings and a Human in the Loop. 2018.
[88] Yan L, Zheng Y, Cao J. Few-shot learning for short text classification. Multimedia Tools and Applications, 2018, 77(22):29799-29810.
[89] Goodfellow IJ, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]// International Conference on Neural Information Processing Systems. 2014.
[90] Royle JA, Dorazio RM, Link WA. Analysis of Multinomial Models with Unknown Index Using Data Augmentation. Journal of Computational & Graphical Statistics, 2007, 16(1):67-85.
[91] Koh PW, Liang P. Understanding Black-box Predictions via Influence Functions. 2017.
[92] Hariharan B, Girshick R. Low-shot visual recognition by shrinking and hallucinating features. 2017.
[93] Liu B, Wang X, Dixit M, et al. Feature space transfer for data augmentation. 2018.
[94] Xian Y, Sharma S, Schiele B, et al. f-VAEGAN-D2: A feature generating framework for any-shot learning. 2019.
[95] Li W, Xu J, Huo J, Wang L, Yang G, Luo J. Distribution consistency based covariance metric networks for few-shot learning. 2019.
[96] Li W, Wang L, Xu J, Huo J, Gao Y, Luo J. Revisiting Local Descriptor based Image-to-Class Measure for Few-shot Learning. 2019.
[97] Gidaris S, Komodakis N. Dynamic few-shot visual learning without forgetting. 2018.
[98] Sun Q, et al. Meta-transfer learning for few-shot learning. 2019.
[99] Jamal MA, Qi GJ. Task Agnostic Meta-Learning for Few-Shot Learning. 2019.
[100] Lee K, et al. Meta-learning with differentiable convex optimization. 2019.
[101] Wang X, et al. TAFE-Net: Task-Aware Feature Embeddings for Low Shot Learning. 2019.
[102] Kim J, Kim T, Kim S, et al. Edge-Labeling Graph Neural Network for Few-shot Learning[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 11-20.
[103] Shen W, Shi Z, Sun J. Learning from Adversarial Features for Few-Shot Classification. 2019.
[104] Chen Z, Fu Y, Kim YX, et al. Image Deformation Meta-Networks for One-Shot Learning. 2019.
[105] Wang YX, Girshick R, Hebert M, et al. Low-Shot Learning from Imaginary Data. 2018.
[106] Wang YX, Hebert M. Learning from Small Sample Sets by Combining Unsupervised Meta-Training with CNNs. 2016.
[107] Li H, Eigen D, Dodge S, et al. Finding Task-Relevant Features for Few-Shot Learning by Category Traversal. 2019.
[108] Gidaris S, Komodakis N. Generating Classification Weights with GNN Denoising Autoencoders for Few-Shot Learning. 2019.
[109] Liu XY, Su YT, Liu AA, et al. Learning to Customize and Combine Deep Learners for Few-Shot Learning. 2019.
[110] Gao TY, Han X, Liu ZY, Sun MS. Hybrid Attention-Based Prototypical Networks for Noisy Few-Shot Relation Classification. 2019.
[111] Sun SL, Sun QF, Zhou K, Lv TC. Hierarchical Attention Prototypical Networks for Few-Shot Text Classification. 2019.
[112] Nakamura A, Harada T. Revisiting Fine-tuning for Few-shot Learning. 2019.
[113] Hou RB, Chang H, Ma BP, et al. Cross Attention Network for Few-shot Classification. 2019.
[114] Jang YH, Lee HK, Hwang SJ, et al. Learning What and Where to Transfer. 2019.

Original article by MondayCat111: https://yzsam.com/2022/02/202202210528019634.html