
GraphMAE ---- a quick read of the paper

2022-06-24 07:59:00 WW935707936

Goal:

The authors argue that generative SSL has already seen mature applications in fields such as NLP (BERT, GPT). On graphs, however, contrastive SSL dominates rather than generative SSL, because generative SSL still has problems in three aspects: (1) the reconstruction objective, (2) training robustness, and (3) the error metric. The paper therefore proposes GraphMAE, a masked graph autoencoder.
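The "masked" part of the autoencoder can be sketched as follows. This is a minimal NumPy sketch under my own assumptions, not the paper's implementation: a random subset of nodes has its features replaced by a shared mask token (in GraphMAE this token is a learnable vector; zeros are used here for simplicity), and the function name is hypothetical.

```python
import numpy as np

def mask_node_features(X, mask_rate=0.5, rng=None):
    """Replace the features of a random node subset with a shared [MASK] vector.

    X: node feature matrix, shape (num_nodes, feat_dim)
    mask_rate: fraction of nodes to mask
    Returns the masked copy of X and the indices of the masked nodes.
    """
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    num_mask = int(mask_rate * n)
    idx = rng.choice(n, num_mask, replace=False)   # nodes to mask
    mask_token = np.zeros(X.shape[1])              # learnable in the paper; zeros here
    X_masked = X.copy()
    X_masked[idx] = mask_token
    return X_masked, idx
```

The encoder then sees `X_masked`, and the reconstruction loss is computed only on the masked indices `idx`.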

In short: revive generative SSL on graphs.

In a nutshell, the whole paper centers on the following four points:

 

(In my view, this paper can be seen as adding several useful training tricks on top of the overall GAE framework. These tricks could also be used in our own models.)

In recent years, self-supervised learning (SSL) has been studied extensively. Generative SSL in particular has achieved great success in natural language processing and other fields, with BERT and GPT widely adopted. Despite this, contrastive learning, which relies heavily on structural data augmentation and complicated training strategies, has remained the dominant approach in graph SSL, while progress in generative graph SSL, especially graph autoencoders (GAEs), has so far fallen short of the potential reached in other fields. In this paper, we identify and examine the issues that negatively impact the development of GAEs, including their reconstruction objective, training robustness, and error metric. We present GraphMAE, a masked graph autoencoder that mitigates these issues for generative self-supervised graph learning. Instead of reconstructing structure, we propose to focus on feature reconstruction, with both a masking strategy and a scaled cosine error, which benefits robust training. We conduct extensive experiments on 21 public datasets across three graph learning tasks. The results show that GraphMAE, a simple graph autoencoder with our careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines. This study offers an understanding of graph autoencoders and demonstrates the potential of generative self-supervised learning on graphs.
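The scaled cosine error mentioned in the abstract can be sketched as follows. This is a minimal NumPy sketch, assuming the loss form from the GraphMAE paper: the per-node cosine error between original and reconstructed features, raised to a power gamma >= 1 to down-weight easy (already well-reconstructed) nodes, then averaged over the masked nodes. The function name and the epsilon constant are my own.

```python
import numpy as np

def scaled_cosine_error(x, z, gamma=2.0, eps=1e-8):
    """Scaled cosine error over masked nodes.

    x: original features of the masked nodes, shape (n, d)
    z: reconstructed features, shape (n, d)
    gamma: scaling exponent (>= 1); gamma = 1 recovers the plain cosine error
    """
    x_norm = x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)
    z_norm = z / (np.linalg.norm(z, axis=1, keepdims=True) + eps)
    cos_sim = np.sum(x_norm * z_norm, axis=1)     # per-node cosine similarity
    return np.mean((1.0 - cos_sim) ** gamma)      # average scaled error
```

With a perfect reconstruction the loss is 0; for orthogonal feature vectors the per-node error is 1 regardless of gamma, while partially correct reconstructions are down-weighted more as gamma grows.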

Original site

Copyright notice
This article was written by [WW935707936]. Please include the original link when reposting. Thanks.
https://yzsam.com/2022/175/202206240423110121.html