top | item 41618339

jackmpcollins | 1 year ago

That gif is really cool! I built a Python package, magentic [0], which similarly parses streamed LLM output and lets you use it before generation finishes. Plenty of use cases / prompts can be refactored into a "generate list, then generate for each item" pattern to take advantage of this speedup from concurrent generation.

[0] https://magentic.dev/streaming/#object-streaming
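The core idea behind object streaming can be sketched without any LLM library: keep appending stream chunks to a buffer, repeatedly try to close the partial JSON array, and yield each item as soon as it parses. This is a naive illustrative sketch (the stream and chunking are simulated stand-ins, not magentic's actual implementation):

```python
import json

def stream_chunks():
    # Simulated LLM token stream producing a JSON array
    # (a stand-in for a real streaming API response).
    text = '["draft intro", "draft body", "draft outro"]'
    for i in range(0, len(text), 8):
        yield text[i:i + 8]

def iter_list_items(chunks):
    """Yield JSON array items as they complete, before the stream ends.

    After every chunk, attempt to close the partial array and parse it;
    emit any items not yet seen. A partial trailing item simply fails to
    parse and is retried on the next chunk.
    """
    buf = ""
    emitted = 0
    for chunk in chunks:
        buf += chunk
        candidate = buf.rstrip().rstrip(",")
        for suffix in ("", "]"):
            try:
                items = json.loads(candidate + suffix)
            except json.JSONDecodeError:
                continue
            for item in items[emitted:]:
                yield item
            emitted = len(items)
            break

for item in iter_list_items(stream_chunks()):
    print(item)
```

Each yielded item could then be handed to a concurrent task (e.g. a further LLM call) immediately, rather than waiting for the whole list to finish generating.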
