BWStearns | 1 year ago

As far as I know there aren't really any LLMs good enough to run locally. Maybe that'll change with the R1 improvements and future derivative work.

We use about 4-6 calls per improvement, with a mix of Anthropic and OpenAI models. Interestingly, we couldn't get sufficiently good performance from just one model. They can be good or bad at different tasks even when one task doesn't seem materially harder than the other.
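A minimal sketch of what per-task provider routing like this might look like. The sub-task names and the routing table below are entirely hypothetical (the comment only says a mix of Anthropic and OpenAI is used across 4-6 calls), and the actual API calls are omitted:

```python
# Hypothetical routing table: which provider handles which sub-task of
# one "improvement". The task names and assignments are illustrative only.
ROUTING = {
    "draft": "anthropic",
    "critique": "openai",
    "revise": "anthropic",
    "verify": "openai",
}


def pick_provider(task: str) -> str:
    """Return the provider assumed to perform best on this sub-task."""
    return ROUTING[task]


def plan_improvement() -> list[tuple[str, str]]:
    """Plan the calls for one improvement; the LLM calls themselves
    (Anthropic/OpenAI SDK requests) are left out of this sketch."""
    return [(task, pick_provider(task)) for task in ROUTING]
```

The point of a static table like this is that swapping a sub-task to a different provider is a one-line change once you notice one model handles it better.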
