
Google | ICML 2022: The State of Sparse Training in Deep Reinforcement Learning

2022-06-22 19:53:00 Zhiyuan community

【Title】The State of Sparse Training in Deep Reinforcement Learning

【Authors】Laura Graesser, Utku Evci, Erich Elsen, Pablo Samuel Castro

【Publication date】2022.6.17

【Paper link】https://arxiv.org/pdf/2206.10369.pdf

【Recommendation】In recent years, the use of sparse neural networks has grown rapidly across many areas of deep learning, especially computer vision. Their appeal lies mainly in reducing the number of parameters that must be trained and stored, and in improving learning efficiency. Surprisingly, few attempts have been made to explore their application in deep reinforcement learning (DRL). In this work, the authors systematically investigate how existing sparse training techniques perform across a variety of deep reinforcement learning agents and environments. Their findings corroborate the results of sparse training in computer vision: in deep reinforcement learning, at the same parameter count, sparse networks outperform dense networks. The authors analyze in detail how the various components of a deep reinforcement learning system are affected by the use of sparse networks, and propose promising directions for improving the effectiveness of sparse training methods and promoting their adoption in deep reinforcement learning.
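To make the idea of sparse training concrete: the techniques in this space typically zero out a large fraction of a network's weights and keep only the largest-magnitude connections. Below is a minimal NumPy sketch of one such building block, magnitude pruning. This is an illustrative example, not the paper's specific method; the function name `magnitude_prune` and the toy 4×4 weight matrix are assumptions for demonstration.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    `sparsity` is the fraction of weights to drop (e.g. 0.75 keeps 25%).
    Returns the pruned weights and the binary mask that was applied.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to drop
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))          # toy dense weight matrix
pruned, mask = magnitude_prune(w, 0.75)
print(mask.mean())                   # → 0.25 (4 of 16 weights kept)
```

In dynamic sparse training methods (e.g. the RigL-style approaches surveyed in the paper), a mask like this is not fixed once: connections are periodically dropped and regrown during training while the overall sparsity level is held constant.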

 

Original site

Copyright notice
This article was created by [Zhiyuan community]. Please include a link to the original article when reposting. Thank you.
https://yzsam.com/2022/173/202206221832178535.html