item 40473952


sp332 | 1 year ago

Fine-tuning can be useful if you need to generate lots of output in a particular format. You can fine-tune on formatted messages, and the model will then produce that format automatically. That can save a bunch of tokens otherwise spent explaining the output format in every prompt.
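A training file for this is typically a JSONL of chat transcripts where the assistant turn demonstrates the target format. A minimal sketch — the field names follow the chat format used by e.g. OpenAI's fine-tuning API and are an assumption; adapt to your provider:

```python
import json

# Sketch of one fine-tuning example in chat-format JSONL: the
# assistant turn demonstrates the exact output format, so no format
# instructions are needed in the prompt at inference time.
# Field names follow the convention used by e.g. OpenAI's
# fine-tuning API; other providers may differ.
example = {
    "messages": [
        {"role": "user",
         "content": "Extract the name and city: Alice moved to Paris."},
        {"role": "assistant",
         "content": json.dumps({"name": "Alice", "city": "Paris"})},
    ]
}

# A training set is one such JSON object per line in a .jsonl file.
line = json.dumps(example)
```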



NeutralForest | 1 year ago

You can use structured generation instead of fiddling with the prompt, which is unreliable. https://github.com/outlines-dev/outlines
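The idea behind structured generation is to constrain decoding itself rather than add text to the prompt: at each step, tokens that would break the target format are masked out before sampling. A toy sketch of that mechanism — the tiny vocabulary and hand-picked scores are illustrative only, and none of this is the outlines API:

```python
import re

# Toy sketch of structured generation: it masks the decoder's
# choices instead of adding instructions to the prompt. At each
# step, only vocabulary tokens that keep the output a valid prefix
# of the target shape (here: a decimal number) stay eligible.
# Libraries like outlines apply the same idea to a model's real
# logits; the vocabulary and fake scores below are illustrative.
VOCAB = ["0", "1", "5", ".", "a", "the"]

def is_valid_prefix(s: str) -> bool:
    # True if s can still grow into <digits>.<digits>.
    return re.fullmatch(r"([0-9]+(\.[0-9]*)?)?", s) is not None

def generate(scores_per_step):
    out = ""
    for scores in scores_per_step:  # stand-in for per-step logits
        allowed = [t for t in VOCAB if is_valid_prefix(out + t)]
        out += max(allowed, key=lambda t: scores[t])  # greedy pick under the mask
    return out

# The "model" prefers junk tokens ("a", "the"), but the mask
# forces a well-formed number anyway.
steps = [
    {"0": .1, "1": .5, "5": .2, ".": .0, "a": .9, "the": .8},
    {"0": .1, "1": .2, "5": .1, ".": .6, "a": .9, "the": .8},
    {"0": .1, "1": .3, "5": .4, ".": .2, "a": .9, "the": .8},
]
result = generate(steps)  # -> "1.5"
```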

codetrotter|1 year ago

Does this Python package control the LLMs using something other than text? Or is the end result still that the package wraps your prompt with additional text containing instructions that become part of the prompt itself?