Ask HN: Batch Processing with OpenAI
9 points| nantersand | 2 years ago
Any packages that would handle running things async, doing retries, handling the occasional hung connection, and logging performance?
Tried helicone, but I don't think it handles the hung connections.
Just doing it manually for now, but there must be an existing solution?
bob1029|2 years ago
Your state machine could be as simple as: New, Processing, Failed, Succeeded. Outer loop will query the collection every ~second for items that are New or Failed and retry them. Items that are stuck Processing for more than X seconds should be forced to Failed each loop through (you'll retry them on the next pass). Each state transition is written to a log with timestamps for downstream reporting. Failures are exclusively set by the HTTP processing machinery with timeouts being detected as noted above.
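A minimal sketch of the state machine described above (names like `Item`, `sweep`, and the 30-second stuck threshold are illustrative assumptions, not anything from a specific library):

```python
import time
from dataclasses import dataclass, field

# States: New and Failed items get retried by the outer loop; items stuck
# in Processing past STUCK_AFTER seconds are forced back to Failed.
NEW, PROCESSING, FAILED, SUCCEEDED = "New", "Processing", "Failed", "Succeeded"
STUCK_AFTER = 30.0

@dataclass
class Item:
    payload: str
    state: str = NEW
    started_at: float = 0.0
    log: list = field(default_factory=list)

    def transition(self, new_state: str) -> None:
        # Every state transition is logged with a timestamp for reporting.
        self.log.append((time.time(), self.state, new_state))
        self.state = new_state

def sweep(items: list, now: float) -> list:
    """One pass of the outer loop: time out stuck items, return the retry batch."""
    for item in items:
        if item.state == PROCESSING and now - item.started_at > STUCK_AFTER:
            item.transition(FAILED)  # hung connection detected as a timeout
    return [i for i in items if i.state in (NEW, FAILED)]
```

An outer loop would call `sweep` roughly every second and hand the returned batch to the HTTP machinery.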
Using SQL would make iterating on your various batch-processing policies substantially easier. Using a SELECT statement to determine your batch at each iteration would permit adding constraints over aggregates. For example, you could cap the number of simultaneous in-flight requests, or abandon all hope & throw if the statistics are looking poor (e.g., an OpenAI outage).
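A sketch of that batch-selection query using Python's built-in sqlite3 (the `items` schema and the cap of 8 in-flight requests are assumptions for illustration):

```python
import sqlite3

MAX_IN_FLIGHT = 8  # assumed cap on simultaneous in-flight requests

def next_batch(conn: sqlite3.Connection, batch_size: int) -> list:
    # Aggregate constraint: count how many items are currently in flight.
    in_flight = conn.execute(
        "SELECT COUNT(*) FROM items WHERE state = 'Processing'"
    ).fetchone()[0]
    room = max(0, MAX_IN_FLIGHT - in_flight)
    # Select only as many New/Failed items as there is room for.
    return conn.execute(
        "SELECT id, payload FROM items WHERE state IN ('New', 'Failed') LIMIT ?",
        (min(room, batch_size),),
    ).fetchall()
```

The same query could grow an ORDER BY for retry priority, or a join against the transition log to back off items that keep failing.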
psimm|2 years ago
It's a wrapper for OpenAI's Python sample script, plus adjacent functionality like cost estimation and bin-packing multiple inputs into one request.
tmaly|2 years ago
I would consider using one of the models with a 32k context window.
Define a delimiter for the paragraphs, and prefix the prompt with instructions to process each one and write out each result separated by the delimiter.
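A minimal sketch of that delimiter approach; the `---` delimiter and the prompt wording are illustrative assumptions:

```python
DELIM = "\n---\n"  # assumed delimiter between paragraphs/results

def build_prompt(paragraphs: list) -> str:
    # Prefix instructions telling the model to echo the delimiter back.
    instructions = (
        "Process each section below independently. "
        "Write each result, separated by '---'."
    )
    return instructions + DELIM + DELIM.join(paragraphs)

def split_results(response: str) -> list:
    # Split the model's reply back into per-paragraph results.
    return [part.strip() for part in response.split("---") if part.strip()]
```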
Maybe wrap your call in a simple try/catch and do exponential back-off.
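A sketch of that try/catch with exponential back-off; `call` stands in for whatever makes the OpenAI request, and the delays/retry count are illustrative:

```python
import random
import time

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # Exponential back-off with jitter: ~1s, ~2s, ~4s, ...
            time.sleep(base_delay * (2 ** attempt + random.random()))
```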
kolinko|2 years ago
For async and retries I just asked the bot to write proper code, and it worked fine. Could do it myself, but it would take longer.
I didn't run into hung connections myself, but handling them should also be straightforward.
nantersand|2 years ago
But... the async code seems to cause a lot of dead connections, which seem to prevent any new ones from being opened.
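One common fix is to guard every async request with a timeout so a hung connection fails fast instead of tying up the pool; a sketch, where `fetch` is a stand-in for the real API call and the semaphore bound is an assumption:

```python
import asyncio

async def guarded(fetch, sem: asyncio.Semaphore, timeout: float = 30.0):
    async with sem:  # cap simultaneous in-flight requests
        try:
            # A hung connection trips the timeout instead of blocking forever.
            return await asyncio.wait_for(fetch(), timeout=timeout)
        except asyncio.TimeoutError:
            return None  # treat the hang as a failure to retry later
```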