memothon | 1 year ago

I think the discussion around "exponentials" in the scaling of top-end LLMs (think 3.5 Sonnet and GPT-4, not the smaller models) is really pointless. The main heuristic we have for what to expect from performance is scaling, which has worked pretty well so far. These benchmarks are imperfect in lots of ways, aren't necessarily sensitive enough to show exponential progress, and step changes in capability are difficult to predict in advance.

If you zoom out on the first graphic from December 2023 back to 2020, the models released then would score far lower on these benchmarks. The best lens for the future performance of large models is uncertainty.


emregucerr | 1 year ago

> The best lens for future performance of large models is uncertainty.

100% agree. I think a better way to phrase my argument there would be to reject the notion that LLMs are destined to get exponentially smarter (the Twitter fallacy). This is not to say I believe they won't get any smarter in the future. We simply don't know, and building a company/product on the expectation of another Moore's Law is dangerous.