redlock | 2 months ago
This is a ridiculous statement. A simple example of the huge difference is context size.
GPT-4 was, what, 8K? Now we’re in the millions with good retention. And this is just context size, let alone reasoning, multimodality, etc.
redlock | 2 months ago
I have quizzed it on three books (more than 1,500 pages in total) and it gave great answers.
Initially, yes: when they released the 2 million-token context with Gemini 1.5, it wasn't effective.
Try it with Gemini 3 Pro/Flash now.