JetSetIlly|15 days ago

Interesting. I've only dipped my toe in the AI waters but my initial experience with a Go project wasn't good.

I tried out the latest Claude model last weekend. As a test, I asked it to identify areas for performance improvement in one of my projects. One of the areas looked significant and, truth be told, it was an area I expected to see in the list.

I asked it to implement the fix. It was a dozen or so lines and I could see straightaway that it had introduced a race condition. I tested it and sure enough, there was a race condition.

I told it about the problem and it suggested a further fix that didn't solve the race condition at all. In fact, the second fix only tried to hide the problem.

I don't doubt you can use these tools well, but it's far too easy to use them poorly. There are no guard rails. I also believe that they are marketed without any care that they can be used poorly.

Whether Go is a better language for agentic programming or not, I don't know. But it may be to do with what the language is being used for. My example was a desktop GUI application and there'll be far fewer examples of those types of application written in Go.

wild_egg|15 days ago

You need to be telling it to create reproduction test cases first and iterate until it's truly solved. There's no need for you to manually be testing that sort of thing.

The key to success with agents is tight, correct feedback loops so they can validate their own work. Go has great tooling for debugging race conditions. Tell it to leverage those properly and it shouldn't have any problems solving it unless you steer it off course.
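
For concreteness, this is roughly the shape of reproduction test I mean. The Counter type below is a made-up stand-in for whatever shared state the generated fix touched, not code from the project above; the point is that "go test -race" on a test like this gives the agent a pass/fail signal it can iterate against instead of you eyeballing diffs:

    // counter_test.go -- a minimal sketch of a race-reproducing test.
    package counter

    import (
        "sync"
        "testing"
    )

    // Counter stands in for whatever shared state the fix touched.
    // The field is deliberately unguarded so the race detector has
    // something to report until the code is properly synchronized.
    type Counter struct {
        n int
    }

    func (c *Counter) Inc() { c.n++ }

    // Run with: go test -race ./...
    // The race detector flags the unsynchronized writes; once the fix
    // adds a mutex (or switches to sync/atomic), the same test passes.
    func TestCounterConcurrentInc(t *testing.T) {
        var c Counter
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    c.Inc()
                }
            }()
        }
        wg.Wait()
    }

Once a test like that exists, "make go test -race pass without deleting the test" is a much tighter instruction than describing the bug in prose.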

epolanski|15 days ago

+1. Half the time I see posts like this, the answer is "harness".

Put the LLM in a situation where it can test and reason about its results.

Someone|15 days ago

If that’s what you have to do, that makes LLMs look more like advanced fuzzers that take textual descriptions as input (“find code that segfaults calling x from multiple threads”, followed by “find changes that make the tests succeed again”) than like truly intelligent systems. Or maybe we should see them as diligent juniors who never get tired.

JetSetIlly|15 days ago

I accept what you say about the best way to use these agents. But my worry is that there is nothing that requires people to use them in that way. I was deliberately vague and general in my test. I don't think how Claude responded under those conditions was good at all.

I guess I just don't see what the point of these tools is. If I were to guide the tool in the way you describe, I don't see how that's better than just thinking about the problem and writing the code myself.

I'm prepared to be shown otherwise, of course, but I remain highly sceptical.

kitd|15 days ago

TDD and the coding agent: a match made in heaven.

It is Valentine's Day after all.

treyd|15 days ago

If only there were a way to prevent race conditions by design, as part of the language's type system, and in a way that produces rich and detailed error messages that let coding agents troubleshoot issues directly (without having to be prompted to write/run tests that just check for race conditions).