top | item 46344294

redlock | 2 months ago

“But clearly the difference between LLMs in 2025 and 2023 is not as large as between 2023 and 2021.”

This is a ridiculous statement. A simple example of the huge difference is context size.

GPT-4 was, what, 8K? Now we’re in the millions with good retention. And this is just context size, let alone reasoning, multimodality, etc.

Anamon | 2 months ago

I don't think that refutes the point. I'd readily agree with the parent that, in terms of actual usefulness and efficiency gains, we're on a trajectory of diminishing returns.

spider-mario | 2 months ago

The point made by the parent seems to be pretty much the opposite of that. They conceded that tooling has improved but questioned the improvements “at the foundational model level”.

emp17344 | 2 months ago

Gemini’s 2M context window is kind of a gimmick and not usable in practice.

redlock | 2 months ago

That hasn’t been true since Gemini 2.5 Pro.

I have quizzed it with three books (totaling more than 1,500 pages) and it gave great answers.
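A quick back-of-the-envelope check that three books of ~1,500 pages fit in a 2M-token window. The words-per-page and tokens-per-word figures below are rough assumptions (typical for English prose), not measurements from any specific tokenizer:

```python
# Rough token estimate for ~1,500 pages of book text.
PAGES = 1500
WORDS_PER_PAGE = 500    # assumption: typical prose density
TOKENS_PER_WORD = 1.3   # assumption: typical English tokenization rate

estimated_tokens = int(PAGES * WORDS_PER_PAGE * TOKENS_PER_WORD)
print(estimated_tokens)               # ~975,000 tokens
print(estimated_tokens < 2_000_000)   # well under a 2M-token window
```

So even a generous estimate lands under half the advertised window, leaving room for the prompt and the model’s answers.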

Initially, yes: when they released the 2-million-token context with Gemini 1.5, it wasn’t effective.

Try it with Gemini 3 Pro/Flash now.