top | item 36382680

727374 | 2 years ago

DataLoader (https://github.com/graphql/dataloader) eliminates many N+1 issues. Basically, entity requests are queued up during evaluation of the query, and then you can batch them up for fulfillment. It’s a great addition to other asynchronous contexts as well.
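
The queue-then-flush mechanism is simple enough to sketch in plain JavaScript. This is an illustration of the pattern, not DataLoader’s actual implementation; `createLoader`, `batchFn`, and the data shapes are made up for the example:

```javascript
// Sketch of the batching pattern: load() calls made during one tick of
// query evaluation are queued, then flushed as a single batch call.
function createLoader(batchFn) {
  let queue = [];
  return {
    load(key) {
      return new Promise((resolve, reject) => {
        queue.push({ key, resolve, reject });
        if (queue.length === 1) {
          // first enqueue schedules a flush after the current tick
          queueMicrotask(async () => {
            const batch = queue;
            queue = [];
            try {
              const results = await batchFn(batch.map((item) => item.key));
              batch.forEach((item, i) => item.resolve(results[i]));
            } catch (err) {
              batch.forEach((item) => item.reject(err));
            }
          });
        }
      });
    },
  };
}

// Three loads in the same tick become one batched fetch.
const batches = [];
const userLoader = createLoader(async (ids) => {
  batches.push(ids); // e.g. SELECT ... WHERE id IN (1, 2, 3)
  return ids.map((id) => ({ id }));
});

const usersPromise = Promise.all([
  userLoader.load(1),
  userLoader.load(2),
  userLoader.load(3),
]);
// after resolution, batches is [[1, 2, 3]]: one query instead of three
```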

As for arbitrary client queries, there are libraries for query cost estimation that you can use as a gate.
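
As a toy sketch of what such a gate computes (the selection-tree shape and the `listSize` fan-out hint are assumptions for illustration, not any particular library’s API):

```javascript
// Toy cost gate: walk a parsed selection tree, multiply by expected
// list fan-out, and reject queries over a budget before executing them.
function estimateCost(node, multiplier = 1) {
  let cost = 0;
  for (const field of node.fields ?? []) {
    cost += multiplier; // one unit per field resolution
    cost += estimateCost(field, multiplier * (field.listSize ?? 1));
  }
  return cost;
}

// "posts(first: 100) { author { name } }" as a selection tree
const query = {
  fields: [
    {
      name: "posts",
      listSize: 100,
      fields: [{ name: "author", fields: [{ name: "name", fields: [] }] }],
    },
  ],
};

const cost = estimateCost(query); // 1 + 100 + 100 = 201
const BUDGET = 1000;
const allowed = cost <= BUDGET; // true here; deeply nested lists would fail
```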


theptip|2 years ago

Interesting, I hadn’t seen that package, thanks. The general pattern seems quite useful; is it implemented in any SQL ORMs?

It seems like you could apply the same batching/coalescing strategy to async Postgres, for example, but after a quick scan of the SQLAlchemy docs I don’t see anything. (It seems feasible, since they already batch requests in the unit of work; they just don’t coalesce them into bulk operations, AFAICT.)

IceDane|2 years ago

Using something like `pothos` for your GraphQL backend, you can get tight integration with `prisma`, which practically eliminates N+1 issues.

For fields that hit external services, you can define types as "loadable" so that whenever those fields are requested in a batch, they are loaded efficiently to avoid N+1.
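
A rough sketch of the "loadable" idea: resolvers for an external-service type go through a shared per-request cache, so repeated requests for the same entity within one query hit the service once. The names below are illustrative, not pothos’ actual API:

```javascript
// Stand-in for an external service; serviceCalls counts round trips.
let serviceCalls = 0;
async function fetchAuthors(ids) {
  serviceCalls += 1;
  return ids.map((id) => ({ id, name: `author-${id}` }));
}

// Per-request context memoizes in-flight loads, deduplicating lookups.
function makeRequestContext() {
  const pending = new Map();
  return {
    loadAuthor(id) {
      if (!pending.has(id)) {
        pending.set(id, fetchAuthors([id]).then((rows) => rows[0]));
      }
      return pending.get(id);
    },
  };
}

// Two posts by the same author resolve with a single service call.
const ctx = makeRequestContext();
const authorsPromise = Promise.all([ctx.loadAuthor(7), ctx.loadAuthor(7)]);
```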

latchkey|2 years ago

Correct, there are solutions out there, but it’s unknown whether they implemented them.