morganherlocker | 1 year ago

Same, and I've found this works quite well with a C++ backend: send binary data over as struct-of-arrays, load it into typed arrays, and render components from those. Your app will be bottlenecked only on bandwidth, as it should be, even for heavy 3D applications.
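A minimal sketch of the struct-of-arrays idea on the JS side (the layout here is illustrative, not from the comment: a header of 32-bit counts followed by float32 positions and int32 ids; a real app would get the buffer from `fetch(...).then(r => r.arrayBuffer())` rather than building it locally):

```typescript
// Encode: pack two parallel arrays into one contiguous blob.
// Layout (assumed for illustration): [u32 count, u32 count, f32 xs..., i32 ids...]
function encode(xs: Float32Array, ids: Int32Array): ArrayBuffer {
  const buf = new ArrayBuffer(8 + xs.byteLength + ids.byteLength);
  const header = new Uint32Array(buf, 0, 2);
  header[0] = xs.length;
  header[1] = ids.length;
  new Float32Array(buf, 8, xs.length).set(xs);
  new Int32Array(buf, 8 + xs.byteLength, ids.length).set(ids);
  return buf;
}

// Decode: no per-item parsing at all -- just typed-array views
// constructed over the received buffer.
function decode(buf: ArrayBuffer): { xs: Float32Array; ids: Int32Array } {
  const [nx, ni] = new Uint32Array(buf, 0, 2);
  const xs = new Float32Array(buf, 8, nx);         // zero-copy view
  const ids = new Int32Array(buf, 8 + nx * 4, ni); // zero-copy view
  return { xs, ids };
}
```

The same arrays can then be handed to WebGL/WebGPU or iterated directly when rendering.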

There's nothing fundamentally stopping React apps from being fast like this, but the dependency snowball has a way of hiding accumulated dynamic memory allocations and event listeners. Instead, allocations and events should be intentionally grouped and kept contained to a few well-understood pieces of code. Without that, these apps could be ported to native C++ or Rust and would still be nearly as slow, because no language can make a slop of millions of tiny dynamic allocations fast and low-overhead.
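One way to read "intentionally grouping allocations" is as a sketch like this (the particle example is mine, not the commenter's): all the per-item state lives in a few arrays allocated once up front, and the hot path mutates them in place instead of creating millions of small objects for the garbage collector to chase.

```typescript
// Grouped allocation: two Float32Arrays hold all particle state.
// Contrast with `Array<{x: number, v: number}>`, which is one tiny
// heap object per particle.
class Particles {
  xs: Float32Array;
  vs: Float32Array;

  constructor(n: number) {
    this.xs = new Float32Array(n); // single allocation up front
    this.vs = new Float32Array(n);
  }

  step(dt: number): void {
    // Hot loop: in-place updates, zero allocations per frame.
    for (let i = 0; i < this.xs.length; i++) {
      this.xs[i] += this.vs[i] * dt;
    }
  }
}
```

The point isn't the physics; it's that the allocation happens in exactly one well-understood place.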

kopirgan | 1 year ago

C++ backend? Could you elaborate please?!

morganherlocker | 1 year ago

If most data between the backend and frontend consists of raw numeric binary blobs, and you target array types that map directly onto JS TypedArrays (int32_t* -> Int32Array, float* -> Float32Array, etc.), you have a more or less optimal data-transfer mechanism with zero dependencies. The JS "parsing" is faster than the fetch itself, even with tens of millions of items, provided you keep the number of arrays fixed and reasonably small.
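Concretely, that "parsing" is just constructing a handful of views over one buffer: O(1) per array regardless of element count, with no copying. A sketch, assuming a layout (mine, for illustration) of N float positions followed by N int32 flags:

```typescript
// One ArrayBuffer, several zero-copy views -- the C-type mapping in action.
const N = 1_000_000;
const buf = new ArrayBuffer(N * 4 * 2); // pretend this came from fetch()

const positions = new Float32Array(buf, 0, N);  // float*   -> Float32Array
const flags = new Int32Array(buf, N * 4, N);    // int32_t* -> Int32Array

// Both views alias the same underlying memory; constructing them does
// not touch the N million elements at all.
```

Keeping the number of arrays fixed means the view-construction code above never grows, no matter how large N gets.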

This is a natural fit for data-oriented programming in most systems languages, including C++. It's also compatible with numpy and any other dynamic-language backend that has a performance-oriented array primitive or library. The data can be served dynamically from memory, or written to files and served by a static server like nginx.
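As a sketch of the static-file route (assuming a Node writer here purely for illustration; the file name and layout are mine), the same struct-of-arrays blob can be written once to disk and then served as-is by nginx:

```typescript
import { writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

const xs = Float32Array.from([1, 2, 3]);
const ids = Int32Array.from([10, 20, 30]);

// Layout: [u32 count, f32 xs..., i32 ids...] -- the frontend reconstructs
// typed-array views straight from the fetched bytes.
const blob = Buffer.concat([
  Buffer.from(Uint32Array.from([xs.length]).buffer),
  Buffer.from(xs.buffer, xs.byteOffset, xs.byteLength),
  Buffer.from(ids.buffer, ids.byteOffset, ids.byteLength),
]);

const path = join(tmpdir(), "points.bin"); // nginx would serve this statically
writeFileSync(path, blob);
```

A C++ or numpy backend would produce byte-identical output by writing its arrays in the same order.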