
TCP BBR as rate based

2022-06-25 06:28:00 dog250

The vast majority of congestion control (cc) algorithms are cwnd-based; rate-based cc has largely remained a theory.

BBR is rate-based, and cwnd might seem useless to it, but in reality cwnd still works as a secondary control parameter:

A secondary parameter, cwnd_gain, bounds inflight to a small multiple of the BDP to handle common network and receiver pathologies (see the later section on Delayed and Stretched ACKs).

If you were to refactor BBR, you should forget cwnd entirely; even in the delayed & stretched ACKs scenario you could ignore cwnd and compute the pacing rate purely from the measured delivery rate:
[Figure: pacing rate computed directly from the measured delivery rate]
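
As a thought experiment, here is a minimal sketch of that purely rate-based computation, with no cwnd anywhere. The flow structure, the trivial max filter, and the 1.25 gain are my own placeholders, not BBR's actual code:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-flow state: only rate, no cwnd. */
struct flow {
    uint64_t bw_max_bps;   /* max-filtered delivery rate (bits/s) */
    double   pacing_gain;  /* probing gain, e.g. 1.25 / 0.75 / 1.0 */
};

/* Feed one delivery-rate sample (bits/s) into a max filter.
 * A real implementation would use a windowed max so stale samples
 * age out; this sketch keeps it trivial. */
static void bw_sample(struct flow *f, uint64_t delivered_bps)
{
    if (delivered_bps > f->bw_max_bps)
        f->bw_max_bps = delivered_bps;
}

/* pacing_rate derived purely from the measured delivery rate. */
static uint64_t pacing_rate_bps(const struct flow *f)
{
    return (uint64_t)(f->pacing_gain * (double)f->bw_max_bps);
}

int main(void)
{
    struct flow f = { .bw_max_bps = 0, .pacing_gain = 1.25 };

    bw_sample(&f, 80ull * 1000 * 1000);   /* 80 Mb/s sample  */
    bw_sample(&f, 100ull * 1000 * 1000);  /* 100 Mb/s sample */

    printf("pacing rate: %llu bps\n",
           (unsigned long long)pacing_rate_bps(&f));
    /* BBR additionally caps inflight at cwnd_gain * BDP; a purely
     * rate-based refactor would drop that cap entirely. */
    return 0;
}
```
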
But BBR keeps the cwnd constraint all along, a legacy of Van Jacobson. There is no evidence or argument that congestion control must be done with cwnd. Why not give up cwnd completely?

Let me expand on this starting from packet conservation.

When it enters the TCP_CA_Recovery state, BBR starts packet conservation and clamps cwnd. This is in fact a strategy inherited from Reno/CUBIC: BBR claims to be insensitive to packet loss, yet here it is still sensitive. In short, I think BBR's packet conservation in TCP_CA_Recovery adds complexity, is unnecessary, and loss should not be handled this way.
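
For reference, this is roughly the shape of that packet-conservation clamp as I read it in the Linux tcp_bbr module, reduced to a sketch with my own field and function names; treat the details as my reading, not the kernel's exact code:

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified connection state for the sketch. */
struct conn {
    uint32_t cwnd;                /* congestion window (packets)          */
    uint32_t prior_cwnd;          /* cwnd saved on entering recovery      */
    uint32_t packets_in_flight;
    int      packet_conservation; /* 1 during the first round of recovery */
};

/* On entering TCP_CA_Recovery: remember the old cwnd, then fall back
 * to "inflight + newly ACKed". */
void on_enter_recovery(struct conn *c, uint32_t acked)
{
    c->prior_cwnd = c->cwnd;
    c->cwnd = c->packets_in_flight + acked;
    c->packet_conservation = 1;
}

/* While packet conservation is on, cwnd may only grow by what was
 * just ACKed: one packet out per packet in. */
void on_ack_in_recovery(struct conn *c, uint32_t acked)
{
    if (c->packet_conservation) {
        uint32_t floor = c->packets_in_flight + acked;
        if (c->cwnd < floor)
            c->cwnd = floor;
    }
}

/* On leaving recovery, the saved cwnd is restored. */
void on_exit_recovery(struct conn *c)
{
    c->packet_conservation = 0;
    if (c->cwnd < c->prior_cwnd)
        c->cwnd = c->prior_cwnd;
}

int main(void)
{
    struct conn c = { .cwnd = 100, .packets_in_flight = 80 };
    on_enter_recovery(&c, 2);   /* cwnd collapses to inflight + acked */
    on_ack_in_recovery(&c, 3);
    on_exit_recovery(&c);       /* prior cwnd restored                */
    printf("cwnd after recovery: %u\n", c.cwnd);
    return 0;
}
```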

On the other hand, if you go along with the meaning of the cwnd constraint, it does seem to make sense:

  • For non-congestion packet loss, cwnd is restored within about 10 rounds, so little harm is done.
  • When a new flow intrudes, holding on for 10 rounds lets the newly reallocated bandwidth be sampled.

But on closer inspection, this turns out to be wrong:

  • If a new flow intrudes and causes congestion, holding on for 10 rounds is too long, and the resulting packet loss drives up the retransmission rate.
  • After 10 rounds, the newly sampled bandwidth may reflect the cwnd limit rather than the bandwidth actually obtained.
  • Packet conservation eventually restores cwnd, but a flow-intrusion scenario is not one where a restore is appropriate.

From another angle, packet conservation itself is not right either:

  • Packet conservation only constrains the flow itself; in a new-flow-intrusion scenario it achieves nothing.
  • Packet conservation is based on inflight, which is based on the lost count, and the lost count is a predicted count.

Basing a strategy on a predicted count is not workable:

  • If the predicted lost count is higher than the actual one, inflight is underestimated, more packets are retransmitted, and the retransmission rate goes up.
  • If the predicted lost count is lower than the actual one, inflight is overestimated, retransmission is too slow, and the recovery period drags on.

Whether the estimate is too high or too low, packet conservation cannot converge on the optimum; it ends up smeared across these two bad outcomes, unless the prediction is exact, which is impossible.
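
To make the bias concrete, here is a small sketch using the inflight estimate as I understand it from the Linux stack, packets_out - (sacked_out + lost_out) + retrans_out; the numbers are made up:

```c
#include <stdio.h>
#include <stdint.h>

/* inflight as the Linux TCP stack estimates it (my reading of
 * tcp_packets_in_flight()): packets sent but not yet accounted for. */
static uint32_t packets_in_flight(uint32_t packets_out, uint32_t sacked_out,
                                  uint32_t lost_out, uint32_t retrans_out)
{
    return packets_out - (sacked_out + lost_out) + retrans_out;
}

int main(void)
{
    /* 100 packets outstanding, 10 SACKed, 5 retransmitted so far;
     * the true number of losses is 10, but lost_out is only a guess. */
    uint32_t over  = packets_in_flight(100, 10, 20, 5); /* lost overestimated  */
    uint32_t exact = packets_in_flight(100, 10, 10, 5); /* lost guessed right  */
    uint32_t under = packets_in_flight(100, 10,  3, 5); /* lost underestimated */

    /* A smaller inflight estimate leaves more room under the conserved
     * cwnd, so more (possibly spurious) retransmissions go out; a larger
     * estimate leaves less room, so real losses are repaired too slowly. */
    printf("inflight: over=%u exact=%u under=%u\n", over, exact, under);
    return 0;
}
```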

Still, packet conservation works reasonably well in practice; I think it merely holds a lower bound and keeps things from getting worse. And the fact that attempts to give up cwnd have not worked well does not mean giving it up is infeasible; perhaps we just haven't found the right way.

Consider the following.

If you really do give up cwnd completely, try instead to influence the AQM:

  • When there is no packet loss, compute the max-filtered bandwidth from the measured delivery rate.
  • On packet loss, multiplicatively decrease the pacing rate derived from the max-filtered bandwidth (see the sketch after this list).
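
A minimal sketch of this scheme, assuming a 10-round windowed max filter, a 1.25 probing gain, and a 0.7 multiplicative-decrease factor; all three are my own placeholder values, not taken from any real implementation:

```c
#include <stdint.h>
#include <stdio.h>

#define BW_WIN_ROUNDS 10   /* assumed max-filter window, in round trips */

struct ratecc {
    uint64_t bw_win[BW_WIN_ROUNDS]; /* per-round max delivery rate (bps) */
    int      round;                 /* current round index               */
    uint64_t pacing_rate;           /* bits per second                   */
};

/* Max of the windowed bandwidth samples. */
static uint64_t bw_max(const struct ratecc *rc)
{
    uint64_t m = 0;
    for (int i = 0; i < BW_WIN_ROUNDS; i++)
        if (rc->bw_win[i] > m)
            m = rc->bw_win[i];
    return m;
}

/* No loss: feed the delivery-rate sample into the windowed max filter
 * and pace at (gain * max-filtered bandwidth). */
static void on_delivery_sample(struct ratecc *rc, uint64_t delivered_bps,
                               int round_start)
{
    if (round_start) {
        rc->round = (rc->round + 1) % BW_WIN_ROUNDS;
        rc->bw_win[rc->round] = 0;  /* reuse the oldest slot: samples older
                                     * than the window age out             */
    }
    if (delivered_bps > rc->bw_win[rc->round])
        rc->bw_win[rc->round] = delivered_bps;

    rc->pacing_rate = bw_max(rc) * 5 / 4;   /* assumed probing gain 1.25 */
}

/* Loss: multiplicatively decrease the pacing rate itself, which widens
 * the inter-packet gap and lets the bottleneck queue drain. */
static void on_loss(struct ratecc *rc)
{
    rc->pacing_rate = rc->pacing_rate * 7 / 10;  /* assumed beta = 0.7 */
}

int main(void)
{
    struct ratecc rc = { .pacing_rate = 0 };

    on_delivery_sample(&rc, 100ull * 1000 * 1000, 1);
    printf("pacing rate: %llu bps\n", (unsigned long long)rc.pacing_rate);
    on_loss(&rc);
    printf("after loss : %llu bps\n", (unsigned long long)rc.pacing_rate);
    return 0;
}
```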

Multiplicatively decreasing the pacing rate both widens the gap between this flow's packets in the queue and reduces their total number, and the effect is significant (a short worked example follows the list):

  • A larger inter-packet gap lowers the probability that the AQM drops this flow's packets.
  • Fewer packets in the queue help the queue drain.
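
A rough worked example with numbers of my own choosing: at a pacing rate of 100 Mb/s, 1500-byte packets leave 1500 × 8 / 100e6 = 120 µs apart; halving the rate to 50 Mb/s doubles the gap to 240 µs and, over any fixed interval, halves the number of this flow's packets that can be sitting in the bottleneck queue.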

For loss that is merely non-congestion noise, the max-filtered bandwidth has not yet slid out of the filter window, so it can still be used once loss recovery completes; if new traffic intrudes or congestion sets in abruptly, the max-filtered bandwidth will sooner or later slide out of the window, and during that period the multiplicatively decreased pacing rate guarantees two things:

  • The inter-packet gap grows, so this flow's packet loss does not get worse.
  • The total number of packets shrinks, so this flow does not add to the congestion.

Once the congestion eases, re-probing upward from the multiplicatively decreased rate has a strong AIMD flavor (a genuine AIMD here would use the pacing rate and minRTT to convert to a BDP, additively grow that window, and then use the current RTT to convert back into a pacing rate), and it also helps overall fairness.
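
A sketch of that conversion, with hypothetical names, an assumed one-MSS-per-RTT additive step, and an assumed 0.5 decrease factor:

```c
#include <stdint.h>
#include <stdio.h>

#define MSS_BYTES 1500

/* Additive increase expressed on the rate: convert the pacing rate and
 * min RTT into a BDP-equivalent window, add one MSS per RTT (the step
 * size is my assumption), then convert back to a pacing rate using the
 * *current* RTT. */
static uint64_t aimd_increase(uint64_t pacing_rate_Bps,
                              double min_rtt_s, double curr_rtt_s)
{
    double window_bytes = pacing_rate_Bps * min_rtt_s;   /* rate -> BDP     */
    window_bytes += MSS_BYTES;                           /* additive step   */
    return (uint64_t)(window_bytes / curr_rtt_s);        /* BDP -> new rate */
}

/* Multiplicative decrease stays on the rate itself. */
static uint64_t aimd_decrease(uint64_t pacing_rate_Bps)
{
    return pacing_rate_Bps / 2;    /* assumed beta = 0.5 */
}

int main(void)
{
    uint64_t rate = 12500000;      /* 12.5 MB/s = 100 Mb/s */

    /* With queuing, curr RTT > min RTT, so the recomputed rate can come
     * out lower even though the window grew: probing does not pile more
     * onto an already-standing queue. */
    rate = aimd_increase(rate, 0.010, 0.015);
    printf("after AI: %llu B/s\n", (unsigned long long)rate);

    rate = aimd_decrease(rate);
    printf("after MD: %llu B/s\n", (unsigned long long)rate);
    return 0;
}
```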

When an AQM is dropping packets randomly under persistent congestion, spreading out a single flow's packets matters a great deal: it lowers the probability that the AQM's drops land on this flow's packets. In short, a rate-based scheme can keep packet loss from escalating once congestion has already set in, which a cwnd limit does nothing to relieve.

It must be emphasized that pure rate-based control is not good for everyone. Pacing is friendly to the network, but it relies on the CPU for precise timing, which is not friendly to the host. Bursting, the opposite of pacing, is a common host-side network optimization: the CPU hands one long message to the NIC in a single shot and lets it be cut into packets (e.g. TSO). This greatly improves host throughput, but the packets cut from one segment must go out together as a burst, which runs counter to pacing. How many packets at most may be burst together is yet another trade-off.
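
For a sense of how that trade-off is struck in practice, the rule I recall from Linux's tcp_tso_autosize() is roughly "burst about one millisecond worth of the pacing rate, but at least a couple of segments"; the sketch below uses that rule with placeholder constants, so treat it as an assumption rather than the kernel's exact logic:

```c
#include <stdint.h>
#include <stdio.h>

/* How many segments may be bursted to the NIC (e.g. in one TSO shot)
 * at a given pacing rate: about 1 ms of data, at least min_segs. */
static uint32_t tso_burst_segs(uint64_t pacing_rate_Bps, uint32_t mss,
                               uint32_t min_segs)
{
    uint64_t bytes_per_ms = pacing_rate_Bps / 1000; /* ~1 ms of data */
    uint32_t segs = (uint32_t)(bytes_per_ms / mss);
    return segs > min_segs ? segs : min_segs;
}

int main(void)
{
    /* Slow flow: bursts stay tiny, close to true pacing.
     * Fast flow: bigger bursts, cheaper for the CPU, coarser pacing. */
    printf("1 Mb/s  -> %u segs/burst\n", tso_burst_segs(125000ull, 1448, 2));
    printf("10 Gb/s -> %u segs/burst\n", tso_burst_segs(1250000000ull, 1448, 2));
    return 0;
}
```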

Wouldn't the ideal be to pace per bit, or at least per byte? But a network protocol's PDU is the packet, so trade-offs are needed everywhere.

I have doubted the necessity of BBR keeping cwnd as a control parameter from the very beginning, but every time I discussed it with others I was argued into silence and took it for my own ignorance; then, looking at the problem afresh, it felt wrong again, and so on back and forth. cwnd-based cc has existed for 30 years; it is Van Jacobson's masterpiece, it has been expounded for 30 years and has become a paradigm, so it is always easy to find a plausible-sounding reason to justify the necessity of its existence, and abandoning it looks heretical. So I am writing down my own thoughts.

Zhejiang Wenzhou leather shoes wet , It's not fat when it's raining .
