item 44087599

pauldix | 9 months ago

My experience so far with Opus 4 is that it's very good. Based on a few days of using it for real work, I think it's better than Sonnet 3.5 or 3.7, which had been my daily drivers prior to Gemini 2.5 Pro switching me over just 3 weeks ago. It has solved some things that eluded Gemini 2.5 Pro.

Right now I'm swapping between Gemini and Opus depending on the task. Gemini's 1M token context window is really unbeatable.

But the quality of what Opus 4 produces is really good.

edit: forgot to mention that this is all for Rust-based work on InfluxDB 3, a fairly large and complex codebase. YMMV.

Workaccount2 | 9 months ago

I've been having really good results from Jules, which is Google's Gemini agent coding platform [1]. In the beta you only get 5 tasks a day, but so far I have found it to be much more capable than regular API Gemini.

[1] https://jules.google/

trip-zip | 9 months ago

Would you mind giving a little more info on what you're getting Jules to work on? I tried it out a couple of times, but I think I was asking for too large a task, and it ended up being pretty bad, all things considered.

I tried to get it to add 5 new CRUD REST endpoints that follow the same pattern as the other 100 we have. It failed pretty badly, which may just be an indictment of our codebase...

loufe | 9 months ago

I let Jules write a PR in my codebase with very specific scaffolding, and it absolutely blew it. It took more time to untangle the ways it failed than it was worth: it never grasped the codebase and wrote code for a fundamentally different (incorrectly understood) project. I love Gemini 2.5, but I absolutely agree with the gp (pauldix) on their quality/scope point.

smokel | 9 months ago

> Gemini's 1M token context window is really unbeatable.

How does that work in practice? Swallowing a full 1M context window would take on the order of minutes, no? Is it possible to do this for, say, an entire codebase and then cache the results?

ZeroCool2u | 9 months ago

In my experience with Gemini, it definitely does not take a few minutes. I think that's a big difference between Claude and Gemini. I don't know exactly what Google is doing under the hood there (I don't think it's just quantization), but it's definitely much faster than Claude.

Caching a codebase is tricky: whenever you modify the codebase, you invalidate parts of the cache, and because each token's cached state depends on every token that precedes it, a single change invalidates everything from the first edited token onward.
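To make that invalidation point concrete, here is a minimal sketch in plain Python. The function and the toy token lists are made up for illustration and don't reflect any provider's real caching API; the idea is just that a prompt-prefix cache is reusable only up to the first token that changed.

```python
# Hypothetical sketch: why editing a cached prompt invalidates most of the cache.

def reusable_prefix_len(cached_tokens: list[str], new_tokens: list[str]) -> int:
    """A prefix cache can be reused only up to the first differing token,
    because each token's attention state depends on all tokens before it."""
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

cached = ["fn", "main", "(", ")", "{", "old_call", "(", ")", "}"]
edited = ["fn", "main", "(", ")", "{", "new_call", "(", ")", "}"]

print(reusable_prefix_len(cached, edited))  # only the first 5 tokens survive the edit
```

Everything after the edited token has to be recomputed, which is why an edit near the top of a large cached codebase is so much more expensive than one near the bottom.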

pauldix | 9 months ago

Right now this is just in the AI Studio web UI. I have a few command-line scripts to put together a file or two and drop those in. So far I've put about 450k tokens of stuff in there, and then, over a very long conversation and iterations on a bunch of things, built up another 350k tokens in that window.

Then I start over again to clean things out. It's not flawless, but it is surprising what it'll remember from a while back in the conversation.

I've been meaning to pick up some of the more automated tooling and editors, but for the phase of the project I'm in right now it's unnecessary, and the web UI or the Claude app is good enough for what I'm doing.
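The kind of helper script described above might look something like the following sketch. It's an assumption of what such a script does, not pauldix's actual tooling: it bundles a project's source files into one pasteable blob, prefixing each file with its path, and reports a rough token estimate (the ~4 characters per token ratio and the `.rs`/`.toml` extension filter are illustrative guesses).

```python
import tempfile
from pathlib import Path

def bundle_sources(root: Path, exts: tuple[str, ...] = (".rs", ".toml")) -> str:
    """Concatenate matching source files under `root` into one string."""
    parts = []
    for path in sorted(root.rglob("*")):
        if path.is_file() and path.suffix in exts:
            # Prefix each file with its path so the model can keep files apart.
            parts.append(f"// ===== {path} =====\n{path.read_text(errors='replace')}")
    return "\n\n".join(parts)

# Demo on a throwaway directory so the sketch runs anywhere.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "main.rs").write_text("fn main() {}\n")
    blob = bundle_sources(Path(tmp))
    # ~4 characters per token is a common rule of thumb for English and code.
    print(f"{len(blob)} chars, roughly {len(blob) // 4} tokens")
```

The path headers matter: without them a model sees one undifferentiated wall of code, and references to "the file you changed" become ambiguous over a long conversation.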

cleak | 9 months ago

I’m curious about this as well, especially since all coding assistants I’ve used truncate long before 1M tokens.