top | item 27202955


slowstart | 4 years ago

I lead the Windows TCP team. We recently blogged about TCP advancements in Windows, which is very relevant here: https://techcommunity.microsoft.com/t5/networking-blog/algor...

SaveTheRbtz | 4 years ago

A couple of questions:

* What are the reasons for disabling TCP timestamps by default? (If you can answer) will they eventually be enabled by default? (The reason I'm asking is that Linux uses the TS field as storage for syncookies; without it, Linux will drop the WScale and SACK options, greatly degrading Windows TCP performance in the event of a SYN flood.[1])

* I've noticed "Pacing Profile : off" in the `netsh interface tcp show global` output. Is that the same as TCP pacing in the fq qdisc[2]? (If you can answer) will it eventually be enabled by default?

[1] https://elixir.bootlin.com/linux/v5.13-rc2/source/net/ipv4/s... [2] https://man7.org/linux/man-pages/man8/tc-fq.8.html
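For reference, the pacing described in [2] is enabled on Linux by installing fq as the root qdisc; a minimal sketch (the interface name `eth0` is a placeholder, and root privileges are required):

```shell
# Replace the root qdisc on eth0 with fq, which paces TCP flows
# at the rate requested by the congestion control module.
tc qdisc replace dev eth0 root fq

# Verify the qdisc and show per-qdisc statistics.
tc -s qdisc show dev eth0
```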

slowstart | 4 years ago

Windows historically defaulted to accepting timestamps when negotiated by the peer, but didn't initiate the negotiation itself. There are benefits to timestamps and one downside (12 bytes of overhead per packet). Re. syncookies: that's an interesting problem, but under a severe SYN attack, degraded performance is not going to be the server's biggest worry. We might turn them on for the other benefits, but there are no committed plans. Re. the pacing profile: no, that's pacing implemented at the TCP layer itself (unlike the fq qdisc) and is an experimental knob, off by default.
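For anyone who wants to inspect or flip these knobs locally, the relevant netsh commands look roughly like this (a sketch, run from an elevated prompt; I believe `timestamps` accepts `enabled`/`disabled`, but check `netsh interface tcp set global /?` on your build):

```shell
# Show current TCP global settings, including RFC 1323 Timestamps
# and the experimental Pacing Profile.
netsh interface tcp show global

# Opt in to initiating timestamp negotiation (elevated prompt required).
netsh interface tcp set global timestamps=enabled
```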

drummer | 4 years ago

I have a question: why is it that when I open two sockets on Windows and connect them over TCP, there is about a 40% difference in transfer rate when sending from socket A to B compared to sending from B to A?

slowstart | 4 years ago

That's not expected. Are you using loopback sockets, or are these sockets on different endpoints? Is the traffic unidirectional or bidirectional, i.e. are you running the A-to-B and B-to-A transfers simultaneously?

the8472 | 4 years ago

Are equivalents to Linux's BQL/AQL, fq_codel, and TCP_NOTSENT_LOWAT in the pipeline?

slowstart | 4 years ago

I cannot comment on queuing disciplines and limits in future products. Re. TCP_NOTSENT_LOWAT: you may want to look at the Ideal Send Backlog API, which lets an application keep just slightly more than a BDP's worth of data queued, sustaining maximum throughput while minimizing the amount of data buffered in the stack: https://docs.microsoft.com/en-us/windows/win32/winsock/sio-i...
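For comparison, the Linux knob the question refers to can be set system-wide via sysctl (per-socket use goes through setsockopt with TCP_NOTSENT_LOWAT); a sketch, assuming a Linux host, where the 128 KB value is an arbitrary example:

```shell
# Set the global default: a socket is reported writable only while
# its unsent queue is below this many bytes (example value, not a
# recommendation).
sysctl -w net.ipv4.tcp_notsent_lowat=131072

# Inspect the current setting.
sysctl net.ipv4.tcp_notsent_lowat
```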