aarmot | 1 year ago
The reality, of course, is somewhat different and muddy. For example, if you have a network outage followed later by a software crash or a reboot, then all the data sitting in the TCP buffer (several kilobytes up to some megabytes, depending on your tuning) is mercilessly dropped. And your application thinks that just because you used TCP, the data must have been reliably delivered. To combat this you have to implement some kind of sequencing and acking at the application level - but the article scoffs that we are too dumb to implement anything beyond a basic stream /s
I'm not arguing that TCP is useless, just that UDP has its place and we - mere mortals - can use it too. Where appropriate.
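To make the point concrete, here's a minimal sketch of what that application-level sequencing and acking might look like. All the names (`Sender`, `Receiver`, etc.) are illustrative, not any real library; the wire/transport part is left out entirely:

```python
# Sketch: application-level sequencing + acks on top of an unreliable
# (or crash-prone) transport. The sender keeps every payload until it
# sees an explicit ack, so a dropped TCP buffer doesn't silently lose data.

class Sender:
    def __init__(self):
        self.next_seq = 0
        self.unacked = {}          # seq -> payload, retained until acked

    def send(self, payload):
        seq = self.next_seq
        self.next_seq += 1
        self.unacked[seq] = payload
        return (seq, payload)      # what would actually go on the wire

    def handle_ack(self, seq):
        # Safe to forget this payload only once the peer confirmed it.
        self.unacked.pop(seq, None)

    def pending(self):
        # Everything still at risk if the connection or process dies;
        # a real implementation would retransmit these on a timer.
        return dict(self.unacked)


class Receiver:
    def __init__(self):
        self.delivered = {}        # seq -> payload actually received

    def receive(self, packet):
        seq, payload = packet
        self.delivered[seq] = payload
        return seq                 # the ack to send back
```

The point isn't that this is hard, just that TCP alone doesn't give it to you: delivery into the peer's kernel buffer is not delivery into the peer's application.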
vitus | 1 year ago
I will point out that the author of the article is one of the core contributors to the IETF Media-over-QUIC working group (an effort to standardize how one might build these real-time applications over QUIC) and has been working in the real-time media protocols space for 10+ years.
The author recognizes that the title is clickbait, but the key point of the article is that you probably don't want to use raw UDP in most cases. Not that UDP is inherently bad (otherwise he wouldn't be working on improving QUIC).
Lerc | 1 year ago
When you are using UDP, the correct way to handle out-of-order delivery is usually to just ignore the older packet: it's old, and consequently out of date.
Figuring out how to solve any mess caused by mistransmission necessarily has to be done at the application level because that's where the most up to date data is.
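For the common "latest state wins" case (game positions, sensor readings, etc.), that drop-the-stale-packet logic is a few lines. A sketch, with illustrative names and assuming each datagram carries an application-level sequence number:

```python
# Sketch: "latest wins" handling for out-of-order UDP datagrams.
# Anything with a sequence number at or below the newest seen so far
# is discarded as stale rather than reordered.

class LatestState:
    def __init__(self):
        self.last_seq = -1
        self.state = None

    def on_datagram(self, seq, payload):
        if seq <= self.last_seq:
            return False           # stale or duplicate: ignore it
        self.last_seq = seq
        self.state = payload       # newest data replaces everything older
        return True
```

Note this deliberately does no retransmission or reassembly; the application has decided old data is worthless, which is exactly the case where raw-ish UDP makes sense.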