This seems to be the dataloader pattern. There are implementations in many languages, but the idea is that you have a bunch of threads which declare their I/O needs, and then you 1) debounce and merge the requests (uniform access) and 2) cache the results so that later in the graph of calls you don’t need to fetch already loaded data.
Yup! It plays quite nicely with GraphQL, where you are "resolving" rather than "rendering". It's also a nice place to add any caching logic, because a data loader is essentially saying "give me the IDs of as many objects of type X as you like, and I will batch-load them for you".
I see all these comments stating 'oh ORMs are bad' and 'just write some SQL'. Yes, you should probably not be afraid of SQL, and yes, using an ORM for everything is probably not great, and no, ORMs aren't a full replacement for writing SQL occasionally, but taking an extremist pro-SQL point-of-view is not doing any favors to this debate.
There are very real reasons why writing raw SQL is a pain. You can make arguments about type safety of queries, maintainability, mapping into application-level data structures/types, yadayada. Imho the primary argument is that you cannot efficiently write a query that doesn't suffer from duplication of data. If you have an entity A that has many B's, and that same entity A also has many C's, then if you join on both you are now loading A * B * C rows. This is going to be slow too, and is difficult to disentangle.
ORMs in their current form are terrible at tackling this problem, and people are right to call them out for that. "Just" lazy loading or "just" eager loading both do not scale, nor does firing event listeners on a per-entity basis. But rawdogging SQL is not a viable solution either, and I wish people advocating that would stop this attitude of 'oh you prissy dev using your puny ORM, not willing to write a line of SQL, look at me with my glorious hair on my chest and superior intellect writing pristine queries'.
The fundamental problem is with the wire protocol. It’s inherently tabular, but should actually be more like a binary JSON format, with hierarchical representation for joins to avoid repeating the data.
Better yet, the wire formats should have efficient representations for things like column store compression.
Depending on what you are doing there might be some duplication that you could remove by creating hash lookups as in this post, but I would reach for Jet first.
Some of us just aren't smart enough for sql. I'm perpetually running into the situation where I want to join one table with another that has multiple rows. Like a blog post with tags. Exactly like this: https://stackoverflow.com/questions/8201462/join-with-anothe...
For which the answer is: oh, just use GROUP_CONCAT, which isn't even standard SQL. And I've still got to fix it up by running a split when I get it back. Nor does it work particularly well if you're joining against something that isn't a single string column.
So I just bang the rocks together like a caveman and write a second query to go back and fetch all the tags, then loop around and assign them to the matching posts.
Or else, if the post is talking about a "public-facing API resource", can someone tell me why the API wouldn't implement querying for multiple of the same record type at once? Choosing between getting 1 owner and "get ALL owners" (as TFA puts it) seems like it ignores the excluded middle.
That was my first thought as well. Devs will do anything to avoid learning SQL.
You’re spending 20x the code and probably 1000x the time coming up with this nonsense, and for what – so you can say there isn’t any raw SQL in your code base? It’s as ridiculous as denouncing someone for occasionally dropping into Assembly for a specific purpose, or writing a bit of C to call from Python, etc.
For detecting rather than preventing duplicates, I'm fond of this pattern in Go:
At "entry" to your code, add a mutable "X was called" tracker to your context. Anywhere you want to track duplicates, insert/increment something in that tracker in the context. And then when you exit, log the duplicates (at the place where you put the tracker in).
It's reasonably implicit, works for both tracking and implicitly deduplicating (turn it into a cache rather than a counter and voila, lazy deduplication*), and it's the sort of thing that all your middle layers of code don't need to know anything about. As long as they forward contexts correctly, which you REALLY want to do all the time anyway, it Just Works™.
*: obviously you can go overboard with this, there are read-after-write concerns in many cases, etc which this article's "prevent by design" structure generally handles better by making the phases of behavior explicit. but when it works, it's quite easy.
If you're using Go, sqlboiler can do this for you in most common cases (e.g., fetch all the Users matching this filter, and for each User, fetch the related Company)
I've always been sort of fond of 1 + 1. It's too often the case that there's a popular query that doesn't even need the child data to function, and unless you have some elaborate caching mechanism it would be a shame to pay the full cost of the join or however you want to implement it.
Making one query that returns the base data and a second that pulls all of the associated data works often enough.
Then it's only when you need to pull M individual records and the associated data that might put you into M + 1 queries, if you can't work out client side grouping for some esoteric reason. But you've reduced the exponent of the fanout by 1, which can hold you for a long time. Years even.
One way to work around N+1 is to write all your database-layer functions in the form getFoosByIds(id[]) instead of getFooById(id). This allows you to easily compose the loads when you have subresources. It's similar to what the author is doing, but it does not tear the subresources apart from the parent object.
Pushing the subresource fetching down to the database requires using JOINs and fails badly when you have multiple one-to-many relations in one fetch.
Besides pitching Joist, going through OP, I'm not following how verbose the nested examples would get, i.e. ProductLoadBundle is loading "products & widgets".
But what if I need "products & widgets & widget orders"? And sometimes I want just "products & widgets" (2 levels) and sometimes I want "products & widgets & widget orders" (3 levels)?
Would these be the same ProductLoadBundle, with some conditionals in the Product.LoadBundle method? Would the result of those conditionals show up in the types, i.e. sometimes the 3rd level is available and sometimes it is not?
Or would it be two separate bundles, a 2-level Product.LoadTwoLevelBundle and a 3-level Product.LoadWidgetsAndOrdersBundle, which has the pro of better type-safety, but a con of repeating the bundle boilerplate for each unique shape/tree of data your app needs to load.
My guess/hope is that it's the 2nd, if only because I assume brandur also values type-safety over verbosity.
It took me a while to find again, but this OG Scala library in theory handled these adhoc shapes (called "enrichments" in their readme) somewhat elegantly: https://index.scala-lang.org/getclump/clump
Mea culpa another Joist pitch, but TypeScript makes this "give me an adhoc type based on the call-site specific tree of data" super-pleasant, i.e. Loaded<Product, "widgets"> vs. Loaded<Product, { widgets: "orders" }> gets you the two different 2-level vs. 3-level types.
...granted, you just have to accept non-Go/non-native levels of performance, but tradeoffs. :-)
I'm very grateful to this post for introducing me to sliceutils to create a map from a slice. I think that's a very elegant way to create nested models given a parent and child struct.
The problem is not in Go's endless verbosity. The problem is the basic concept of ORMs - that the model you need for efficiently rendering data is the same model that you need for efficiently storing data. ORMs map them 1:1 and that's what results in N+1 queries and all the other problems.
Go's endless verbosity and lack of dynamic features is a blessing, not a curse. Because you have to write your own data access layer, you can break this blatantly false assumption that what you need in the UI is the same as what you need in the database.
To break the N+1 problem, you can do funky stuff like pulling back sub-components as array fields in the main component, pulling multiple result sets, or concatenating the sub-component data into a single text field: whatever your UI requires. Because you're not forcing the database query to map 1:1 with whatever structs you've got inside your application, you can get creative with how you structure the database storage and how you query that storage. You can create funcs in the database that do crazy shit with the data to return exactly what the UI requires in a custom struct. Because it all runs on the database itself, it's fast and painless. Because you have to manually map all the structs in your application yourself (thanks Go!), you are not constrained to mirror the database structure in your code. Your UI code does what it needs, your database does what it needs, and you can map between them however you want.
ActiveRecord is easy to use and very concise, which is what it optimises for. Go is not optimising for the same thing. The author is trying to recreate ActiveRecord in Go without realising that ActiveRecord is a straitjacket, and is obviously struggling. If you free yourself from those constraints, the world becomes a simpler, better place.
I think the N+1 problem is overblown. The number of database calls will scale with the volume of data retrieved, but the volume of data retrieved should always be small.
I think it’s underblown. If more people were properly horrified about how much more needless work it causes the DB server, the network, and the client to do, they’d burn it with fire.
In my experience it’s the biggest culprit behind random system fallovers that “work on my machine”. Well sure. Your test page load only resulted in 8 round trips to the server with a hidden O(n^2) in-process merge. The same page on the fully populated prod server required 137 round trips, and it still fails under load even though you can’t physically fit more RAM in the thing.
Agree the DB side is overstated. These particularly suck though in ruby on multithreaded servers (eg puma), where N+1s are constantly giving control of the GVL to another potentially greedy thread. I’m sure python multithreaded servers will suffer similarly.
Oh it definitely isn't. Data volume may be small, but latency will kill you, especially if your database server isn't on the same machine as your application. If you fetch an entity A with a one-to-many to entity B, with a typical size of 100 B's for every A, you are looking at 101 separate queries if you access any relations on B, which, assuming 1ms of network latency, gives you a lower bound of 100ms for an operation that might have taken well below 10ms in total had you fetched in one go.
> The number of database calls will scale with the volume of data retrieved, but the volume of data retrieved should always be small.
Isn't that the point? Repeated network calls will cause latency even if the data is minimal; the total amount of data returned will always be the same, so the "N" extra network calls are pure overhead. For applications where the amount of data is usually small, network calls will likely be the bottleneck.
ORMs are such a toxic idea, we'd be better off if we could un-invent them. Look how thoroughly this poor author's mind has been poisoned by long-term exposure.
Please get out there and write some SQL. I promise it won't hurt you.
dfee|1 year ago
Here’s one impl: https://github.com/graphql/dataloader
Ozzie_osman|1 year ago
superice|1 year ago
stephen|1 year ago
> then if you join on both you now are loading A * B * C rows
I'm realizing it's not actually written up in our proper docs, but the "cross join lateral" trick is pretty neat:
https://joist-orm.io/blog/nextjs-sample-app#join-based-prelo...
This will return A number of (author) rows, where each author row has a `_b` column that is a JSON array of that author's books. And if you asked for C comment rows, then one of the "columns"/indices in each of those book arrays will itself be a nested array of the book's comments.
So all the data gets returned, but with no repetition/Cartesian product.
Hat tip to PostGraphile, where I first saw the "cross join lateral" + "json_agg" technique used. Disclaimer: we're not actually running this Joist feature in production yet--just haven't had time to run it through the test suite.
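For readers who haven't seen the trick: a hedged sketch of what such a query might look like in Postgres. The table and column names here are invented, and the real Joist/PostGraphile SQL will differ; the point is that the lateral subquery aggregates each author's books into a single JSON value, so the result stays at A rows instead of A * B.

```sql
SELECT a.*, b_agg._b
FROM authors a
CROSS JOIN LATERAL (
  SELECT json_agg(json_build_array(b.id, b.title)) AS _b
  FROM books b
  WHERE b.author_id = a.id
) b_agg;
```

Nesting another `json_agg` lateral inside the book subquery extends the same idea to the book's comments.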
jiggawatts|1 year ago
Tarean|1 year ago
- Eager loading the data efficiently is annoying, though most ORMs can do batched select-in loading now
- Detecting changes against the DB is annoying if you update more than a single row; it requires change tracking or a diff
- Saving changes is a topo sort so you don't delete while foreign keys exist. Also, gotta wait for a foreign key to exist when a new row is inserted
But just let me tell the ORM a spanning tree of the schema and load that! Why do all ORMs default to cyclic object graphs and lazy loading?
gregwebs|1 year ago
sqlc supports embedding but not embedded slices?
sbstp|1 year ago
andrscyv|1 year ago
tedunangst|1 year ago
lainga|1 year ago
sgarland|1 year ago
Groxx|1 year ago
peter_l_downs|1 year ago
https://github.com/volatiletech/sqlboiler
hinkley|1 year ago
dventimi|1 year ago
janci|1 year ago
Just do a single, separate query per table.
stephen|1 year ago
If he can give up Go, we've got a TypeScript ORM that will de-N+1 basically everything* that is not a paginated/limit-offset query:
https://joist-orm.io/docs/goals/avoiding-n-plus-1s
This works even in adhoc loops, i.e. if you have a lifecycle hook** of "after an author changes, do x/y/z logic", and you update 100 authors, every SQL operation invoked by those ~100 individual hooks is auto-batched.
We've been running this in production for ~4 years at this point, and haven't had an N+1 since then (although we didn't initially support auto-batch find queries; that came later).
Of course kudos to dataloader.
*everything --> any queries our "find" API supports, which doesn't do aggregates, sums, havings, etc.
**lifecycle hooks --> yes, a blessing and a curse; we're always attempting to find better/higher-level abstractions for declaring the intent of business logic, than raw/imperative hooks.
stephen|1 year ago
https://index.scala-lang.org/getclump/clump
austinpena|1 year ago
marcus_holmes|1 year ago
dventimi|1 year ago
kstrauser|1 year ago
ElatedOwl|1 year ago
superice|1 year ago
saghm|1 year ago
slotrans|1 year ago
fragmede|1 year ago
erikpukinskis|1 year ago
- render
- container
- context
- base
- model
- action
- component
- bake
- build
- generate
- view
- control
- process
- resource
Any I should add/remove?
sigmonsays|1 year ago
engineering around deficiencies sometimes yields interesting results but this isn't one of them.
I'd say this code is spaghetti.