top | item 47057873


scoopdewoop | 12 days ago

Prompting LLMs for code simply takes more than a couple of weeks to learn.

It takes time to get an intuition for the kinds of problems they've seen in pre-training, what environments they faced in RL, and what bizarre biases and blind spots they have. Learning to google was hard, learning to use other people's libraries was hard, and this is on par with those skills at least.

If there is a well-known design pattern you have in mind, that's a great thing to call out. Knowing what to add to the context takes time and taste. If you are asking for pieces so large that you can't trust them, ask for smaller pieces and their composition. It's a force multiplier, and your taste for abstractions as a programmer is one of the factors.

In the early Usenet/forum days, the XY problem described users asking for implementation details of their X solution to the Y problem, rather than asking how to solve Y. In LLM prompting, people fall into the opposite trap: they have an X implementation they want to see, and rather than ask for it, they describe the Y problem and expect the LLM to arrive at the same X solution. Just ask for the implementation you want.

Asking bots to ask bots seems to be yet another skill.


overgard|12 days ago

Let me clarify: I've been using the latest models for the last two weeks, but I've been using AI for about a year now. I know how to prompt. I don't know why people think it's an amazing skill; it's not much different from writing a good ticket.

keeda|12 days ago

Writing a good ticket is not a common skill. IMO it seems deceptively easy, but it usually requires years of experience to understand what to include and to express it in the most concise yet unambiguous terms possible for the intended audience.