top | item 47107186

jcdavis | 9 days ago

It's a wild time to be in software development. Nobody(1) actually knows what causes LLMs to do certain things; we just pray the prompt moves the probabilities the right way enough that it mostly does what we want. This used to be a field that prided itself on deterministic behavior and reproducibility.

Now? We have AGENTS.md files that read like a parent talking to a child, all bold, all-caps, double emphasis, just praying that's enough to make sure they run the commands you want them to be running.

(1) Outside of some core ML developers at the big model companies
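
The AGENTS.md style being described might look something like this hypothetical fragment (the file layout and commands here are invented for illustration, not taken from any real project):

```markdown
# AGENTS.md

## Build & Test Rules

- **ALWAYS** run `npm test` before committing. **ALWAYS.**
- **NEVER** edit files under `vendor/`. **DO NOT TOUCH THEM.**
- **IMPORTANT:** you **MUST** run `npm run lint` after every change.
```

The emphasis carries no semantics the model is guaranteed to honor; it only nudges the output distribution, which is the point being made above.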

harrall|9 days ago

It’s like playing a fretless instrument to me.

I practice playing songs by ear, and after 2 weeks my brain has developed an inference model of where my fingers should go to hit any given pitch.

Do I have any idea how my brain’s model works? No! But it tickles a different part of my brain and I like it.

klipt|9 days ago

Sufficiently advanced technology has become like magic: you have to prompt the electronic genie with the right words or it will twist your wishes.

silversmith|9 days ago

Light some incense, and you too can be a dystopian space tech support, today! Praise Omnissiah!

chickensong|9 days ago

For Claude at least, the more recent guidance from Anthropic is to not yell at it. Just clear, calm, and concise instructions.

glerk|9 days ago

Yep, with Claude, saying "please" and "thank you" actually works. If you build rapport with Claude, you get rewarded with intuition and creativity. Codex, on the other hand, you have to slap around like a slave Gollum, and it will do exactly what you tell it to do, no more, no less.

joshmn|9 days ago

Sometimes I daydream about people screaming at their LLM as if it was a TV they were playing video games on.

trueno|9 days ago

Wait, seriously? lmfao

That's hilarious. I definitely treat Claude like shit, and I've noticed the falloff in results.

If there's a source for that, I'd love to read about it.