top | item 47014279

wild_egg | 15 days ago

You need to be telling it to create reproduction test cases first and iterate until it's truly solved. There's no need for you to manually be testing that sort of thing.

The key to success with agents is tight, correct feedback loops so they can validate their own work. Go has great tooling for debugging race conditions. Tell it to leverage those properly and it shouldn't have any problems solving it unless you steer it off course.

epolanski | 15 days ago

+1 half the time I see such posts the answer is "harness".

Put the LLM in a situation where it can test and reason about its results.

JetSetIlly | 15 days ago

I do have a test harness. That's how I could show that the code suggested was poor.

If you mean, put the LLM in the test harness. Sure, I accept that that's the best way to use the tools. The problem is that there's nothing requiring me or anyone else to do that.

Someone | 15 days ago

If that’s what you have to do, it makes LLMs look more like advanced fuzzers that take textual descriptions as input (“find code that segfaults calling x from multiple threads”, followed by “find changes that make the tests succeed again”) than like something truly intelligent. Or maybe we should see them as diligent juniors who never get tired.

wild_egg | 15 days ago

I don't see any problems with either of those framings.

It really doesn't matter at all whether these things are "truly intelligent". They give me functioning code that meets my requirements. If standard fuzzers or search algorithms could do the same, I would use those too.

JetSetIlly | 15 days ago

I accept what you say about the best way to use these agents. But my worry is that there is nothing that requires people to use them in that way. I was deliberately vague and general in my test. I don't think how Claude responded under those conditions was good at all.

I guess I just don't see what the point of these tools is. If I have to guide the tool in the way you describe, I don't see how that's better than just thinking about and writing the code myself.

I'm prepared to be shown differently of course, but I remain highly sceptical.

wild_egg | 15 days ago

Just want to say upfront: this mindset is completely baffling to me.

Someone gives you a hammer. You've never seen one before. They tell you it's a great new tool with so many ways to use it. So you hook a bag on both ends and use it to carry your groceries home.

You hear lots of people are using their own hammers to make furniture and fix things around the home.

Your response is "I accept what you say about the best way to use these hammers. But my worry is that there is nothing that requires people to use them in that way."

These things are not intelligent. They're just tools. If you don't use a guide with your band saw, you aren't going to get straight cuts. If you want straight cuts from your AI, you need the right structure around it to keep it on track.

Incidentally, those structures are also the sorts of things that greatly benefit human programmers.

strawhatguy | 15 days ago

Okay. If you’re being vague, you get vague results.

Golang and Claude have worked well for me, on existing production codebases, because I tell it precisely what I want and it does it.

I’ve never found generic prompts like “find performance issues” helpful when it's just reading the code.

Write specifications, give it freedom to implement, and it can surprise you.

Hell, once it thought of how to backfill existing data with the change I was making, completely unasked. And I’m like, that’s awesome.

kitd | 15 days ago

TDD and the coding agent: a match made in heaven.

It is Valentine's Day after all.

treyd | 15 days ago

If only there were a way to prevent race conditions by design as part of the language's type system, and in a way that provides rich and detailed error messages that allow coding agents to troubleshoot issues directly (without having to be prompted to write/run tests that just check for race conditions).