top | item 35506662

morrbo | 2 years ago

Massive YMMV moment for me. My particular use case was "extract the following attributes from a load of unstructured text, format the results as JSON". ChatGPT was the best (but only on 4 and Davinci); Vicuna just didn't perform at all (nor did other variants of LLaMA 7/13/33B). Bard smashed it, relatively speaking, in terms of speed. I gave up pretty quickly though because of the lack of information on pricing and/or an API. It's funny how all-or-nothing these things seem to be.

avereveard | 2 years ago

On the smaller models you may want to split the task into smaller chunks, either in parallel (one attribute at a time) or in sequence: first extract the attributes, then take that output and ask the model to format it as JSON.
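The sequential variant might look like this sketch. The `query_llm(prompt)` helper is hypothetical (it would wrap whatever small model you're running); here it's stubbed with canned responses so the two-step flow is runnable:

```python
import json

def query_llm(prompt: str) -> str:
    # Hypothetical wrapper around your local model; stubbed with
    # canned responses so the two-step flow can be demonstrated.
    if "Extract the attributes" in prompt:
        return "name: Acme Widget\nprice: 9.99"
    return '{"name": "Acme Widget", "price": "9.99"}'

def extract_as_json(text: str) -> dict:
    # Step 1: ask only for the attributes, as plain text.
    attrs = query_llm(f"Extract the attributes (name, price) from:\n{text}")
    # Step 2: feed that output back and ask only for JSON formatting.
    raw = query_llm(f"Format the following attributes as JSON:\n{attrs}")
    return json.loads(raw)

result = extract_as_json("The Acme Widget costs $9.99.")
```

Keeping each prompt to a single job (extract, then format) is the whole point: a small model that fails at the combined task often handles each half fine.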

Tostino | 2 years ago

In relation to this: when using GPT-4, I've added this addendum to my prompts: "This seems like a lot of work, please split the work into two chunks, and let's start on the first chunk now."

It will generally segment the problem in some logical way and work just fine, with vastly improved reasoning due to not trying to do as much at once.