I had a similar problem when I was making a tool processing a lot of data in the browser. I'd naively made a large array of identical objects each holding a bunch of fields with numbers.
Turns out, this works completely fine in Firefox. However, in Chrome, it produces millions of individual HeapNumber allocations (why is that a thing??) in addition to the objects and uses GBs of RAM, and is slow to access, making the whole thing unusable.
Replacing it with a SoA structure using TypedArray made it fast in both browsers and fixed the memory overhead in Chrome.
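For anyone who hasn't done this refactor before, here's a minimal sketch of what it looks like (the field names are made up for illustration):

```javascript
// Before: array of objects -> in V8, each double-valued field may end up
// as a separate HeapNumber allocation.
const records = [{ price: 1.5, qty: 2 }, { price: 3.25, qty: 4 }];

// After: one TypedArray per field, indexed in parallel. A Float64Array
// stores raw doubles inline in one buffer, so there is no per-number box
// regardless of engine.
const n = records.length;
const price = new Float64Array(n);
const qty = new Float64Array(n);
for (let i = 0; i < n; i++) {
  price[i] = records[i].price;
  qty[i] = records[i].qty;
}

// Access by index instead of by field on an element object:
console.log(price[1] * qty[1]); // 13
```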
As someone more familiar with systems programming than web, the concept of creating individual heap allocations for a single double baffles me beyond belief. What were they thinking?
Yeah, this is a historical design difference between Firefox's Spidermonkey JS engine and Chrome's V8.
Spidermonkey uses (I'm simplifying here; there are cases where this isn't true) a trick called NaN-boxing, where all values are 64 bits wide, and anything that isn't a double-precision float gets smuggled inside the payload bits of a NaN. This means you can store a double, a float32, an int, or an object pointer all in a field of the same size. Great, but it creates problems and complications for asm.js/wasm, because you can't rely on all the bits of a NaN surviving a round trip through the JS engine.
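To make the trick concrete, here's a toy sketch of the NaN-boxing idea in plain JavaScript using BigInt bit twiddling. The tag layout is invented for illustration, not SpiderMonkey's actual encoding:

```javascript
// A quiet NaN has all exponent bits set plus the top mantissa bit, leaving
// ~51 payload bits free. We pack a made-up type tag and a 32-bit int in there.
const QNAN = 0x7ff8000000000000n;    // quiet-NaN bit pattern
const TAG_INT = 0x0001000000000000n; // hypothetical "this is an int" tag

function boxInt(i) {
  // Store the int's 32-bit two's-complement form in the low payload bits.
  return QNAN | TAG_INT | BigInt.asUintN(32, BigInt(i));
}

function isBoxedInt(bits) {
  return (bits & (QNAN | TAG_INT)) === (QNAN | TAG_INT);
}

function unboxInt(bits) {
  // Recover the signed 32-bit value from the low payload bits.
  return Number(BigInt.asIntN(32, bits & 0xffffffffn));
}

const bits = boxInt(-42);
console.log(isBoxedInt(bits), unboxInt(bits)); // true -42
```

Every non-double value lives inside an otherwise-impossible NaN bit pattern, so a plain 64-bit slot can hold any of them, which is also why arbitrary NaN payloads can't be guaranteed to survive the engine.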
V8 instead allocates doubles on the heap (those are the HeapNumber allocations you saw). I forget the exact historical reason why. IIRC they also do some fancy stuff with integers: if your integer fits in 31 bits, it counts as a "Smi" (small integer) in that engine and gets special performance treatment. So letting your integers get too big is also a performance trap, not just having double-precision numbers.
EDIT: I found something just now that suggests Smis are now 32 bits instead of 31 bits in 64-bit builds of V8, so that's cool!
> Eliminating per-element object overhead — This is the biggest win (~5-6x)
I feel like this phrasing might be easy to misinterpret. Without carefully considering the context, it seems like it's implying that objects have higher overhead than arrays, and my intuition is that this is true, but I'd argue that there's a potentially more relevant way of looking at things.
In the "object of arrays" layout, you have one object and three arrays, but in the "array of objects" layout, you have one array and N objects, where N is the size of the array. Even if the overhead of an object were the same as that of an array, you'd be looking at more overhead as soon as you went past three elements. In fact, even if the overhead of an object were lower than the overhead of an array, you'd still reach more overhead with the "array of objects" layout once you have enough elements to make up for the difference. With a TypedArray (or an array of a single type that's optimized by the runtime in the way described fairly early on in the article), you're not looking at an extra level of indirection per element like you would with an object.
I'd be curious to see what the results would be if they repeated the "array of objects" benchmark with an "array of arrays", where each element is an array of size 3. I could imagine them being quite similar, but I'm also not sure if there's even more nuance that I'm failing to account for (e.g. maybe the runtime would recognize that an array of N/3 elements, each being an array of 3 numbers, could be "flattened" to an underlying representation of an array of size N in memory and perform the same optimizations).
I think the meta lesson here may be that intuition about performance of arrays in JavaScript might be pretty tricky. At least in terms of external semantics, they're supposed to be roughly equivalent to objects with numbers as keys, but in practice there are probably enough optimizations being done (like the implicit recognition of an array of numbers as described in the article) that I suspect that intuition might be a bit naive in a lot of cases, and it's probably better to verify what's actually happening at runtime rather than trying to guess. (The meta meta lesson is that this is true for a lot of things in pretty much every language, and it's sometimes going to be necessary to verify your assumptions about what performance will be, but I think that's something easy to fail to do even when you're aware of it being an easy trap, so having some general things to look out for like arrays potentially not being intuitive can still be helpful).
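A quick harness for that experiment might look like this. Timings will vary wildly by engine, which is rather the point; the final check only verifies that the three layouts compute the same answer:

```javascript
const N = 1_000_000;

// Array of objects
const aos = Array.from({ length: N }, (_, i) => ({ x: i, y: i + 1, z: i + 2 }));
// Array of small arrays, as proposed above
const aoa = Array.from({ length: N }, (_, i) => [i, i + 1, i + 2]);
// Structure of (typed) arrays
const xs = new Float64Array(N), ys = new Float64Array(N), zs = new Float64Array(N);
for (let i = 0; i < N; i++) { xs[i] = i; ys[i] = i + 1; zs[i] = i + 2; }

function time(label, fn) {
  const t0 = performance.now();
  const result = fn();
  console.log(label, (performance.now() - t0).toFixed(1) + 'ms');
  return result;
}

const sumAos = time('AoS', () => { let s = 0; for (const p of aos) s += p.x + p.y + p.z; return s; });
const sumAoa = time('AoA', () => { let s = 0; for (const p of aoa) s += p[0] + p[1] + p[2]; return s; });
const sumSoa = time('SoA', () => { let s = 0; for (let i = 0; i < N; i++) s += xs[i] + ys[i] + zs[i]; return s; });

console.log(sumAos === sumAoa && sumAoa === sumSoa); // true
```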
That Structure of Arrays performs much better than Array of Structures has been well known for a long time; what I'd love to read more about is how people design code around this fact.
For bigger codebases where you might not even be the person (or team) who designed the data types, and are just given a class or struct that needs to be collected into a large array, it's not simple to just decompose structures into an array per property.
In fact, this is a basic thing to want to do, but I've never seen language support for it in popular, industry-standard languages. The `FancyList<Point>` idea from another comment is indeed interesting, but I guess it would require reflection.
Running the benchmark from the article on my laptop (M4 Macbook Air) had a few interesting results:
* when running the script with Node.js, the results are in line with the article (SoA is the fastest)
* Bun is slower than Node.js with both SoA and AoS.
* Bun has similar performance between SoA and AoS.
* in Bun, Interleaved is the fastest one by a significant margin. This is consistent through runs.
% bun bench.js
AoS: 924.54ms
SoA: 1148.57ms
Interleaved: 759.01ms
Bun's performance profile seems very different from Firefox and V8-based runtimes there. I wonder how QuickJS would fare. The article didn't mention the CPU used either, the performance difference may be dependent on the architecture as well.
> This test is a manufactured problem, a silly premise, false test cases and honestly dishonest if not ignorant
It’s amazing how vitriolically wrong people can be. Before publicly criticizing someone in the above way, prove them wrong first. Don’t just assume they’re wrong.
Koffiepoeder|1 month ago
Firefox / Chrome: seems the interleaved being slower is consistent across browsers!
gethly|1 month ago
Anyway, here are a few videos of interest:
https://www.youtube.com/watch?v=WwkuAqObplU
https://www.youtube.com/watch?v=IroPQ150F6c
Odin also supports SoA natively https://odin-lang.org/docs/overview/#soa-data-types
mr_toad|1 month ago
Arrays are objects, so I’d be surprised if it made much difference.
kemayo|1 month ago
That feels sufficiently intuitive that describing it as "a JavaScript performance issue" is a bit confusing.
(There's other optimizations they're applying, but that's the only one that really matters.)
KolmogorovComp|1 month ago
i.e. `FancyList<Point>` would internally create a list for every field of `Point` and reconstruct appropriately when indexing the FancyList.
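A rough sketch of how such a container could look in plain JavaScript, with the columnar storage hidden behind push/get. The class and its API are invented for illustration, not a real library:

```javascript
// One TypedArray per field; callers still read list.get(i).x as if it
// were an ordinary array of objects.
class FancyList {
  constructor(fields, capacity) {
    this.fields = fields;
    this.columns = Object.fromEntries(
      fields.map(f => [f, new Float64Array(capacity)])
    );
    this.length = 0;
  }
  push(obj) {
    const i = this.length++;
    for (const f of this.fields) this.columns[f][i] = obj[f];
  }
  get(i) {
    // Reconstruct an object view on demand; a fancier version could hand
    // back a Proxy that reads/writes the columns lazily instead of copying.
    const out = {};
    for (const f of this.fields) out[f] = this.columns[f][i];
    return out;
  }
}

const points = new FancyList(['x', 'y', 'z'], 16);
points.push({ x: 1, y: 2, z: 3 });
points.push({ x: 4, y: 5, z: 6 });
console.log(points.get(1).y); // 5
```

Without reflection over the element type, the field list has to be passed in explicitly, which is exactly the language-support gap the comments above are pointing at.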
anematode|1 month ago
I'm a bit rusty here, does V8 actually do auto-vectorization of JavaScript code these days?