top | item 38658019


hmage | 2 years ago

You're essentially programming using English. Anything that isn't mentioned explicitly, the model will have a tendency to misinterpret. Being extremely exact is very similar to software engineering when coding for CPUs.


3xnl | 2 years ago

I don't think so. In the end, you're still just asking a question.

hmage | 2 years ago

1. The text is _engineered_ to evoke a specific response.

2. LLMs can do more than answer questions.

3. Question answering usually doesn't need any prompt engineering, since you're essentially asking for an opinion where any answer is valid (different characters will say different things to the same question, and that's fine).

4. LLMs aren't humans, so they miss nuance a lot and hallucinate facts confidently, even GPT-4, so you need to handhold them with "X is okay, Y is not, Z needs to be step by step", etc.

I want, for example, to make it write an excerpt from a fictional book, but it gets a lot of things wrong, so I add more and more specifics into my prompt. It doesn't want to swear, for example - I engineer the prompt so that it thinks it's okay to do so, etc.
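The workflow described above, stacking explicit constraints onto a base instruction until the output matches the intent, can be sketched as plain string assembly. This is a minimal illustration; the task and constraint wording below are hypothetical examples, not any particular API's required format:

```python
# Sketch of "engineering" a prompt: start with a task, then add one
# explicit "X is okay, Y is not" rule per iteration until the model
# stops getting things wrong. No API call here; just prompt assembly.

def build_prompt(task: str, constraints: list[str]) -> str:
    """Assemble a prompt from a task plus explicit per-line rules."""
    lines = [task, "", "Follow these rules exactly:"]
    lines += [f"- {rule}" for rule in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    "Write a one-page excerpt from a gritty crime novel.",
    [
        "Profanity is okay; this is fiction for adult readers.",
        "Do not summarize the plot; write the scene itself.",
        "Stay in third person, past tense, throughout.",
    ],
)
print(prompt)
```

Each time the model misbehaves (refuses to swear, summarizes instead of writing the scene), another rule gets appended, which is why the final prompts end up so long and exact.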

"Engineer" is a verb here, not a noun. It's perfectly valid to say "Prompt Engineering", since this is the same word used in 'The X was engineered to do Y' sentence.

Anthropic also has prompt engineering documentation - https://docs.anthropic.com/claude/docs/constructing-a-prompt - which gives examples of bad and good prompts.