
adamzwasserman | 29 days ago

Although I write very little code myself anymore, I don't trust AI code at all. My default assumption: every line is the most mid possible implementation, every important architecture constraint violated wantonly. Your typical junior programmer.

So I run specialized compliance agents regularly. I watch the AI as it codes and interrupt frequently to put it back on track. I occasionally write snippets myself as few-shot examples. Verification without reading every line, but not "vibe checking" either.
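The few-shot-snippet idea above can be sketched roughly like this: prepend a hand-written reference snippet to the coding prompt so the model imitates its style. The names (`build_prompt`, `EXAMPLES`) and the sample snippet are illustrative assumptions, not from the comment.

```python
# A hand-written snippet demonstrating the preferred style:
# explicit failures, no silent fallbacks. Purely illustrative.
EXAMPLES = [
    "def load_config(path: str) -> dict:\n"
    "    with open(path) as f:\n"
    "        return json.load(f)  # let a malformed file raise; don't swallow it\n",
]

def build_prompt(task: str, examples: list[str] = EXAMPLES) -> str:
    """Assemble a coding prompt with few-shot snippets ahead of the task."""
    shots = "\n\n".join(
        f"Example {i + 1} (preferred style):\n{ex}"
        for i, ex in enumerate(examples)
    )
    return f"{shots}\n\nNow implement the following in the same style:\n{task}"

prompt = build_prompt("a loader for YAML configs")
```

The point is that the examples carry the stylistic constraints implicitly, so the instruction text itself can stay short.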


mikaelaast | 29 days ago

I like this. The few-shot example snippet method is something I’d like to incorporate into my workflow to better align generated code with my preferences.

adamzwasserman | 29 days ago

I have written a research paper on another interesting prompting technique that I call axiomatic prompting. On objectively measurable tasks, when an AI scores below 70%, including clear axioms in the prompt systematically increases success.

In coding this translates to: when trying to impose a pattern or architecture that is different enough from the "mid" approach the AI is compelled to use, including axioms about the approach (in an IF-this-THEN-that style, as opposed to few-shot examples) will improve success.

The key is the 70% threshold: if the model already has enough training data, axioms hurt. If the model is underperforming because the training set did -not- have enough examples (for example, hyperscript), axioms help.
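A minimal sketch of how the 70% gate described above might look in practice: include the IF/THEN axioms only when the model's measured baseline score on that task family falls below 0.70. The axioms, function names, and scores here are invented for illustration; they are not from the paper.

```python
# Illustrative IF/THEN axioms about an imposed architecture.
AXIOMS = [
    "IF a handler touches the database THEN it goes through the repository layer.",
    "IF a function can fail THEN it returns a Result type, never None.",
]

def build_prompt(task: str, baseline_score: float,
                 axioms: list[str] = AXIOMS) -> str:
    """Inject axioms only for task families where the model underperforms."""
    if baseline_score >= 0.70:
        # Model already has enough training data; per the comment,
        # adding axioms here would hurt rather than help.
        return task
    rules = "\n".join(f"- {a}" for a in axioms)
    return f"Follow these axioms strictly:\n{rules}\n\nTask:\n{task}"

# Hypothetical scores: low for a rare DSL, high for a common task.
weak = build_prompt("add a hyperscript event handler", baseline_score=0.42)
strong = build_prompt("write a Python CSV parser", baseline_score=0.91)
```

The design choice mirrors the threshold claim: the gate keeps axioms out of prompts for well-trained task families and injects them only where the training set was thin.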