item 38206870

thorax | 2 years ago

Don't think so, but there were some guesses along those lines for 3.5-turbo, i.e. training a much smaller model on quality question/answer pairs from GPT-4. The same tactic has worked again and again for other LLMs.
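
The distillation tactic described above can be sketched roughly as follows. This is a minimal illustration, not anyone's actual pipeline: `teacher` is a hypothetical callable standing in for the larger model, and the JSONL layout mirrors the common chat-style fine-tuning format.

```python
import json

def build_distillation_set(prompts, teacher, out_path="distill.jsonl"):
    """Query a stronger 'teacher' model on a list of prompts and save
    its answers as fine-tuning examples for a smaller 'student' model."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            answer = teacher(prompt)  # hypothetical call into the larger model
            record = {"messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": answer},
            ]}
            f.write(json.dumps(record) + "\n")

# Toy teacher standing in for GPT-4-quality output:
build_distillation_set(["What is 2+2?"], teacher=lambda p: "4")
```

The student is then fine-tuned on the resulting file, so it imitates the teacher's answers without ever seeing the teacher's weights.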

I'm definitely curious about the context window increase. I'm having a hard time telling if it's 'real' vs. a fast, specially trained summarization prework step. That said, in my minor anecdotal use cases it's been doing a rather solid job of not losing info within that context window.
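
The "summarization prework" hypothesis would look something like this: compress each chunk of a long input with a fast model before the main model ever sees it. This is purely a sketch of the idea, with `summarize` and `answer` as hypothetical stand-ins for the two models involved:

```python
def chunk(text, size=1000):
    """Split a long document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def effective_long_context(document, question, summarize, answer, size=1000):
    """Simulate a larger context window: compress each chunk with a fast
    summarizer, then answer the question over the concatenated summaries."""
    summaries = [summarize(c) for c in chunk(document, size)]
    return answer("\n".join(summaries), question)
```

A model doing this could look like it handles a huge window while only ever attending over the compressed summaries, which is also why it would be lossy in ways a 'real' window isn't.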
