top | item 45880796

LeafItAlone | 3 months ago

>The thing that struck me most is how creative they are at finding new ways to fail

Wow, they are really going for that human-like behavior aren’t they?


verdverm | 3 months ago

If we're talking about emulating users, sure, but this is supposed to be a tool that helps me get my job done.

If (e.g.) you dig into how something like Copilot works, they do dumb things like ask^ the LLM to do glob matching after a file read (to pull in more instructions)... just use a damn glob library instead of a non-deterministic method that's known to be unreliable

^ it's just a table in the overall context, so "asking" is a bit anthropomorphizing
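The deterministic alternative the comment is pointing at is already in the standard library. A minimal sketch using Python's `fnmatch` (purely illustrative; this is not Copilot's actual code or file layout):

```python
from fnmatch import fnmatch

# Hypothetical file paths, standing in for whatever the agent just read.
paths = [
    "src/main.py",
    "docs/guide.md",
    "notes/instructions.md",
]

# Deterministic glob matching: same inputs, same answer, every time --
# no LLM call, no chance of a hallucinated match.
matches = [p for p in paths if fnmatch(p, "*.md")]
print(matches)  # ['docs/guide.md', 'notes/instructions.md']
```

A plain library call like this is cheap, testable, and reliable, which is the whole point of the complaint above.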

cluckindan | 3 months ago

I would consider a bunch of "dumb/power user" agents more useful than coding agents. The more they fail to use my software, the better!

zahlman | 3 months ago

> ^ it's just a table in the overall context, so "asking" is a bit anthropomorphizing

I interpreted GP as just saying that you are already anthropomorphizing too much by supposing that the models "find" new ways to fail (as if trying to defy you).