Flowdalic's comments

Flowdalic | 7 months ago | on: A month using XMPP (using Snikket) for every call and chat (2023)

> The standardization process comes into play when you think you have found a good solution, which should be adopted by THE standard respectively the ecosystem.

Nah, the standardization process starts much earlier. Using the example of the IETF process, after which the XMPP standardization process is largely modeled: standardization starts when you submit an I-D to the IETF and/or approach an IETF WG.

> What matters is what the standard itself looks like, do you have a coherent specification which specifies the current way of doing things, including optional components? Or do you have a set of independent ways of doing it, because the standardization process doesn't actually decide what is the correct way of doing something (e.g. managing a group chat)

Well put and I totally agree (I think no one would have a reason to disagree with that statement).

Flowdalic | 7 months ago | on: A month using XMPP (using Snikket) for every call and chat (2023)

I am not sure if I would phrase it that way.

(Seemingly) conflicting extensions are another consequence of the loose coupling between standardization and implementations. In addition, the emergence of several functionally overlapping extensions is stimulated by the freely accessible standardization process.

Especially in the early phase of an extension, you want to encourage experimentation with different approaches. Early selection would be disadvantageous.

Flowdalic | 7 months ago | on: A month using XMPP (using Snikket) for every call and chat (2023)

> Personal speculation but I blame the "everything is an extension" model - it was meant to reduce fragmentation and allow clients with different featuresets to interoperate

I could be wrong, but that reads like you suggest that there is an alternative to the "extension model".

However, any solution where standardization and implementations are independent entities, and thereby enjoy a sufficient degree of freedom, will trend toward a situation where you have a robust core specification plus optional extensions.

Think about protocols like SMTP and DNS—each has a foundational core that’s been expanded upon by numerous optional features.

Flowdalic | 3 years ago | on: Golang disables Nagle's Algorithm by default

> Even if the application is making 50 byte sends why aren't these getting coalesced once the socket's buffer is full?

Because maybe those 50 bytes are latency sensitive and need to reach the recipient as soon as possible?

> I understand that Nagle's algorithm will send the first couple packets "eagerly" […] Disabling Nagle's algorithm should be trading network usage for latency

No, Nagle's algorithm will delay outgoing TCP packets in the hope that more data will be provided to the TCP connection that can be shoved into the delayed packet.

The issue here is not Go's default setting of TCP_NODELAY. There is a use case for TCP_NODELAY, just like there is a use case for disabling TCP_NODELAY, i.e., enabling Nagle's algorithm (see RFC 896). So any discussion about the default behavior appears to be pointless.
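To make the per-connection nature of that choice concrete, here is a minimal Python sketch (illustrative only; the blog post's context is Go): TCP_NODELAY is an ordinary per-socket option, so an application can pick the trade-off for each connection rather than relying on any global default.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm for latency-sensitive traffic ...
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# ... or re-enable it (the traditional default) for bulk transfers.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 0)

print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # 0
```

Either way it is the application's call, which is why arguing over the language runtime's default misses the point.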

Instead, I believe the application or an underlying library is to blame, because I don't see how performing a bulk transfer of data using "small" (a few bytes) writes is anything but bad design. Not writing large (e.g., page-sized) chunks of data to the socket's file descriptor, especially when you know that many more of these chunks are to come, just kills performance on multiple levels.

If I understand the situation the blog post describes correctly, then git-lfs is sending a large (50 MiB?) file in 50-byte chunks. I suspect this is because git-lfs (or something between git-lfs and the Linux socket, e.g., a library) issues writes to the socket with 50 bytes of data from the file.
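A hypothetical Python sketch of the fix I have in mind (git-lfs is written in Go, and this is not its actual code): put a userspace buffer in front of the transport so that tiny application-level writes are coalesced into page-sized chunks. `CountingSink` is a stand-in for the socket, used here only to count what actually reaches it.

```python
import io

class CountingSink(io.RawIOBase):
    """Stand-in for a socket file descriptor that counts raw writes."""
    def __init__(self):
        self.writes = 0
        self.data = bytearray()
    def writable(self):
        return True
    def write(self, b):
        self.writes += 1
        self.data += b
        return len(b)

sink = CountingSink()
buffered = io.BufferedWriter(sink, buffer_size=4096)

# 1000 "small" 50-byte records, as in the blog post's scenario ...
for _ in range(1000):
    buffered.write(b"x" * 50)
buffered.flush()

# ... reach the "socket" as a handful of ~4 KiB writes, not 1000.
print(sink.writes, len(sink.data))
```

With the buffer in place the TCP stack sees large writes and can fill packets regardless of whether Nagle's algorithm is on.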

Flowdalic | 3 years ago | on: Golang disables Nagle's Algorithm by default

The problem does not seem to be that TCP_NODELAY is on, but that the packets sent carry only 50 bytes of payload. If you send a large file, then I would expect that you invoke send() with page-sized buffers. This should give the TCP stack enough opportunity to fill the packets with a reasonable amount of payload, even in the absence of Nagle's algorithm. Or am I missing something?

Flowdalic | 3 years ago | on: Meson Build System 1.0

It should be possible to install Meson via pip install --user. Even though I prefer system-wide installations, I believe this weakens your argument for user-defined functions in your situation.

Flowdalic | 3 years ago | on: IETF Draft: Centralization, Decentralization, and Internet Standards

QUIC does a lot more than "the one for TCP", though I also believe that modern TCP consists of more than just one RFC (which you already hinted at).

I guess the art in protocol design is to have as few mandatory-to-implement parts as possible, which are themselves minimal in complexity, so that a minimal implementation is doable with a reasonable amount of effort while already achieving a good result (and UX). The optional parts can then be added piece by piece, after the implementation has already been published/released.

Flowdalic | 3 years ago | on: Zoom: Remote Code Execution with XMPP Stanza Smuggling

I get your confusion. But keep in mind that it is not only about picking the library that shows up as the first result of your Google search. My naive self thinks that a million-dollar company should do some research and evaluate different options when choosing an external codebase to build their flagship product on. There are dozens of XMPP libraries, and they picked one that does not seem to delegate XML and Unicode handling to other libraries, which should raise a flag.

Flowdalic | 3 years ago | on: Zoom: Remote Code Execution with XMPP Stanza Smuggling

> One of the harder things with XMPP is that it is a badly-formed document up until the connection is closed. You need a SAX-style/event-based parser to handle it.

That is a common misconception, although I am not sure of its origin. I know plenty of XMPP implementations that use an XML pull parser.
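For illustration, a minimal sketch using Python's stdlib pull parser (standing in for whatever parser a real client uses): the enclosing `<stream:stream>` element is never closed, yet complete stanzas can be pulled out as soon as they have been received.

```python
import xml.etree.ElementTree as ET

parser = ET.XMLPullParser(events=("end",))

# The stream header: the document stays "unclosed" for the
# lifetime of the connection.
parser.feed("<stream:stream xmlns:stream='http://etherx.jabber.org/streams' "
            "xmlns='jabber:client'>")

# A stanza arrives; its end event fires even though the
# surrounding document is still incomplete.
parser.feed("<message to='romeo@example.net'><body>hi</body></message>")

stanzas = [elem for event, elem in parser.read_events()
           if elem.tag == "{jabber:client}message"]
print(stanzas[0].find("{jabber:client}body").text)  # hi
```

Pulling events like this fits the stanza-at-a-time nature of XMPP just as well as a SAX-style callback parser does.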

Flowdalic | 3 years ago | on: Zoom: Remote Code Execution with XMPP Stanza Smuggling

It appears that Gloox, a relatively low-level C++ XMPP client library, rolled much of its Unicode and XML parsing itself, which made such vulnerabilities more likely. There may be good reasons not to re-use existing modules and rely on external libraries, especially if you target constrained low-end embedded devices, but you should always be aware of the drawbacks. And the Zoom client typically does not run on those.

Flowdalic | 4 years ago | on: XMPP, a comeback story

Using XML is one of XMPP's biggest strengths. XML is well designed, well documented, and has a rich set of supporting libraries. XML documents can be composed of other XML documents in a sound fashion, which is a major feature for an extensible protocol like XMPP, and XML documents compress well, making them suitable for low-bandwidth conditions (see [XEP-0365](https://xmpp.org/extensions/xep-0365.html)). I also never experienced considerable battery drain when battery-powered devices use XMPP compared to a binary protocol.
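As a rough illustration of the compression point (a toy measurement, not a XEP-0365 benchmark), the verbose, repetitive markup of a chat session is exactly the kind of input DEFLATE shrinks well:

```python
import zlib

# 100 repetitive chat stanzas, as an XMPP session might produce.
stanza = ("<message to='juliet@example.org' type='chat'>"
          "<body>hello</body></message>")
stream = (stanza * 100).encode()

compressed = zlib.compress(stream, 9)
print(len(stream), len(compressed))  # compressed size is a small fraction
```

The tag and attribute names that make XML look bulky on the wire are precisely the parts that compress away.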

Flowdalic | 4 years ago | on: XMPP, a comeback story

I wonder why people jump so quickly to the conclusion to "let XMPP die" when the protocol can also be iteratively improved. Presence is not required in XMPP; it's an optional feature. Everything you said has been considered in newer XMPP extension protocols (like XEP-0369: MIX).

Flowdalic | 6 years ago | on: Ask HN: A New Decade. Any Predictions?

1. Concurrency platforms will finally help to utilize the multiple cores of a system; as a result, we will see many-core architectures with plenty of simple cores

2. The consequences of quantitative easing will emerge and affect us all

3. Another cryptocurrency and Bitcoin will form the backbone of a usable payment network with near-instant transactions for a low fee

Flowdalic | 6 years ago | on: Movim 0.16 – A federated web-based social XMPP client

> I think it started to die when Google decided the XMPP spec was not good enough for them and deviated from it

It was my impression that Google dropped XMPP support not because the spec was "not good", but because they saw no advantage in allowing federation, since nearly nobody else federated with them, while it came with a cost.

The XMPP specification is open and in large parts malleable, nothing would have stopped Google from participating in improving it.
