top | item 41083810

realprimoh | 1 year ago

So do most LLMs nowadays, no?

viraptor | 1 year ago

Kind of. Those that are explicitly trained to do that with consistent formats will do it better. They'll also save you the extra tokens needed to explain the format/method of interacting with functions. But yeah, you can simulate this with any recent model and enough explanation.
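A minimal sketch of what "simulate this with enough explanation" can look like: the tool schema and call format are spelled out in the prompt (the extra tokens the comment mentions), and the model's free-form reply is parsed leniently. The tool name, schema, and reply below are hypothetical, not from any real model or API.

```python
import json
import re

# Hypothetical tool spec -- name and schema are illustrative only.
TOOLS = {
    "get_weather": {
        "description": "Return current weather for a city.",
        "parameters": {"city": "string"},
    }
}

def build_prompt(tools, user_message):
    """Explain the call format to a plain chat model inside the prompt.
    A model trained for function calling would not need this preamble."""
    lines = [
        "You can call these functions by replying with a single JSON object",
        'of the form {"function": <name>, "arguments": {...}} and nothing else:',
    ]
    for name, spec in tools.items():
        params = ", ".join(spec["parameters"])
        lines.append(f"- {name}({params}): {spec['description']}")
    lines.append(f"User: {user_message}")
    return "\n".join(lines)

def parse_call(model_output):
    """Extract the first JSON object from free-form text; models not
    trained on a consistent format often wrap the call in prose."""
    match = re.search(r"\{.*\}", model_output, re.DOTALL)
    if not match:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    return call if call.get("function") in TOOLS else None

# Simulated model reply (no real LLM behind this):
reply = 'Sure! {"function": "get_weather", "arguments": {"city": "Oslo"}}'
print(parse_call(reply))
```

Models fine-tuned for tool use emit the structured call directly, so both the format preamble and the tolerant parsing become unnecessary.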