top | item 43058349

mcqueenjordan | 1 year ago

> But I just checked and, unsurprisingly, 4o seems to do reasonably well at generating Semgrep rules? Like: I have no idea if this rule is actually any good. But it looks like a Semgrep rule?

This is the thing with LLMs. When you’re not an expert, the output always looks incredible.

It’s similar to the fluency paradox: if you’re not a native speaker of a language, anyone you hear speaking it at a higher level than your own sounds fluent to you, even if they’re actually just a beginner.

The problem with LLMs is that they’re very good at appearing to speak “a language” at a higher level than you, even if they totally aren’t.

tptacek | 1 year ago

Hold on, hold on. You're missing a step here.

I agree completely that an LLM's first attempt to write a Semgrep rule is as likely as not to be horseshit. That's true of everything an LLM generates. But I'm talking about closed-loop LLM code generation. Unlike legal arguments and medical diagnoses, you can hook an LLM up to an execution environment and let it see what happens when the code it generates runs. It then iterates until it has something that works.

Which, when you think about it, is how a lot of human-generated code gets written too.

So my thesis here does not depend on LLMs getting things right the first time, or without assistance.
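[Editor's note: a minimal sketch of the closed loop described above. The `generate` callable is a hypothetical stand-in for an LLM API call (prompt in, source code out); the loop runs each candidate in a subprocess and feeds the failure output back into the next prompt. This is an illustration of the idea, not any particular tool's implementation.]

```python
import subprocess
import sys
import tempfile
from typing import Callable, Optional


def closed_loop(
    task: str,
    generate: Callable[[str], str],  # hypothetical LLM call: prompt -> code
    max_iterations: int = 5,
) -> Optional[str]:
    """Generate code, execute it, and feed failures back until a run succeeds.

    Returns the first candidate that exits cleanly, or None if the budget
    of iterations is exhausted.
    """
    prompt = task
    for _ in range(max_iterations):
        code = generate(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=30
        )
        if result.returncode == 0:
            # "Works" here only in the weak sense that it ran without error.
            return code
        # Close the loop: show the model its own failure and try again.
        prompt = f"{task}\n\nPrevious attempt failed with:\n{result.stderr}"
    return None
```

Note that the success condition is just a clean exit code; as the replies below point out, deciding whether the code actually does the right thing still requires a human-supplied specification or test suite.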

bambax | 1 year ago

The problem is what one means by "works". Is it just that it runs without triggering exceptions here and there?

One has to know, and understand, what the code is supposed to be doing, to evaluate it. Or use tests.

But LLMs love to lie, so they can't be trusted to write the tests, or even to report whether the code they wrote passed them.

In my experience, the way to use LLMs for coding is exactly the opposite: the user should already have very good knowledge of the problem domain as well as the language used, and just needs to have a conversation about how to approach a specific implementation detail (or get help with an obscure syntax quirk). Then LLMs can be very useful.

But having them directly output code for things one doesn't know, in a language one doesn't know either, hoping they will magically solve the problem by iterating in "closed loops", will result in chaos.

danielbln | 1 year ago

That's also the problem with these conversations. Some people evaluate zero-shot prompted code oozing out of gpt-3.5; others plug Sonnet into an IDE with access to a terminal, LSP, diagnostics, etc., crunching through a problem in an agentic self-improvement loop. Those two approaches will generate very different quality levels of code.

vlovich123 | 1 year ago

An LLM, though, doesn't truly understand the goal, and when the solution escapes its capability it frequently gets into circular loops it can't get out of, rather than asking for help. Hopefully that will get fixed, but some of this is an architectural problem rather than something solved by just iterating on the transformer idea.

mcqueenjordan | 1 year ago

Yeah, I more or less agree about the closed-loop part and the broader point the article was making in this context — that it may be a useful use case. I think that process likely lets a lot of horseshit through, but that might still be better than nothing for Semgrep rules.

I only came down hard on that quote out of context because it felt somewhat standalone and I want to broadcast this “fluency paradox” point a bit louder because I keep running into people who really need to hear it.

I know you know what’s up.