
GPU calculation

2022-06-25 08:24:00 Happy little yard farmer


1. Differences between GPU and CPU

The two have different design goals: the CPU is optimized for low latency, the GPU for high throughput.

  • CPU: handles many different data types, and logical decisions introduce a lot of branching, jumps, and interrupt handling
  • GPU: handles large-scale, highly uniform data whose elements are independent of one another, in a clean computing environment that needs no interrupts

What types of programs are suitable for running on a GPU?

  • Compute-intensive programs
  • Easily parallelizable programs
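A tiny illustration of why such workloads map well to the GPU: an elementwise operation over a large tensor consists of a million identical, independent operations with no branching, so each element can be handled by its own GPU thread. A minimal sketch (it falls back to the CPU when no GPU is present):

```python
import torch

dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A million independent, identical multiply-add operations: no branches,
# no data dependencies between elements -- exactly the kind of uniform
# workload the GPU's thousands of simple cores execute in lockstep.
x = torch.rand(1_000_000, device=dev)
y = 3.0 * x + 1.0  # on a GPU, a single kernel launch covers all elements

print(y.shape)
```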

2. Key GPU specifications explained

  1. Video memory (VRAM) size: the larger the model or the training batch, the more VRAM is needed.
  2. FLOPS: floating-point operations per second (also called peak speed), i.e. the number of floating-point operations a processor can perform each second (English: floating-point operations per second, abbreviated FLOPS). It is used to estimate computer performance, especially in scientific computing, which relies heavily on floating-point arithmetic.
  3. Memory bandwidth: the number of bits that can be transferred per clock cycle; the wider the bus, the more data can be moved at once.
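As a rough back-of-envelope illustration of how these last two specs are derived (all numbers below are hypothetical, not tied to any specific card):

```python
def peak_flops(cores: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    """Theoretical peak FLOPS = cores * clock * FLOPs per core per cycle
    (2 per cycle for a fused multiply-add)."""
    return cores * clock_ghz * 1e9 * flops_per_cycle

def memory_bandwidth_gbs(bus_width_bits: int, eff_mem_clock_ghz: float) -> float:
    """Theoretical bandwidth in GB/s = bus width in bytes * effective memory clock."""
    return bus_width_bits / 8 * eff_mem_clock_ghz

# Hypothetical card: 4096 cores at 1.5 GHz, 256-bit bus, 7 GHz effective memory clock
print(f"{peak_flops(4096, 1.5) / 1e12:.1f} TFLOPS")      # 12.3 TFLOPS
print(f"{memory_bandwidth_gbs(256, 7.0):.0f} GB/s")      # 224 GB/s
```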

3. Using the GPU in PyTorch

  1. Move the model to CUDA
  2. Move the data to CUDA
  3. Move the output back to the CPU, then convert it to NumPy
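The three steps above can be sketched as follows (a minimal example with a toy linear model; it falls back to the CPU when no GPU is present):

```python
import torch
import torch.nn as nn

# Pick the device: CUDA if available, otherwise CPU.
dev = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# 1. Move the model to the device.
model = nn.Linear(4, 2).to(dev)

# 2. Move the input data to the device.
x = torch.randn(8, 4).to(dev)

# 3. Compute, then move the output back to the CPU before converting to
#    NumPy (NumPy arrays live in host memory, so .cpu() must come first).
out = model(x).detach().cpu().numpy()
print(out.shape)
```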


If several GPUs are available, you can set dev = "cuda:0" or dev = "cuda:1". Note that if you mix multiple cards across training and prediction, some computation results may be lost. When GPUs are available, you can try schemes such as "train on a single card, predict on multiple cards" or "train on multiple cards, predict on a single card".
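One simple way to sketch the multi-card case is PyTorch's built-in `nn.DataParallel` (a hedged sketch: `DataParallel` is the simplest option, while `DistributedDataParallel` is generally preferred for serious multi-GPU training). The code below also degrades gracefully to one GPU or the CPU:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

if torch.cuda.device_count() > 1:
    # Replicate the model across all visible GPUs; each batch is split
    # along dim 0 and the results are gathered back on cuda:0.
    model = nn.DataParallel(model)
    dev = torch.device("cuda:0")
elif torch.cuda.is_available():
    dev = torch.device("cuda:0")  # or "cuda:1" to pick a specific card
else:
    dev = torch.device("cpu")

model = model.to(dev)
out = model(torch.randn(8, 4).to(dev))
print(out.shape)
```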

4. Choosing a mainstream GPU

Reference resources :https://www.bybusa.com/gpu-rank

https://zhuanlan.zhihu.com/p/61411536

http://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/

Use a desktop (tower) build or a (cloud) server; do not use a laptop.

Free starting points: Colab, Kaggle (RTX 2070)

Different deep-learning architectures prioritize GPU specs differently; broadly speaking there are two routes:

Convolutional networks and Transformers: tensor cores > FLOPS (floating-point operations per second) > memory bandwidth > 16-bit floating-point capability

Recurrent neural networks: memory bandwidth > 16-bit floating-point capability > tensor cores > FLOPS

Welcome to follow my official account 【SOTA Technology interconnection】, where I will share more practical content.


Copyright notice
This article was written by [Happy little yard farmer]. When reposting, please include the original link. Thank you.
https://yzsam.com/2022/02/202202200556381624.html