Towards the end they state: ‘… just adding “do not hallucinate” has been shown to reduce the odds a model hallucinates.’ I find this surprising; it doesn’t fit with my understanding of how a language model works. But I’m very much a novice. Could this be because update training includes feedback that marks bad responses with the term “hallucinate”?
My mental model is that you need to give it ways to make the hallucination not the most plausible continuation. I prefer to tell it that it can say "I don't know".
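A minimal sketch of the technique the commenter describes: wrap the question so that admitting uncertainty is explicitly offered as an acceptable answer, making "I don't know" a plausible continuation. The function name and prompt wording are illustrative, not any particular API.

```python
def build_prompt(question: str) -> str:
    """Wrap a question so that admitting uncertainty is explicitly allowed."""
    return (
        "Answer the question below. If you are not confident in the answer, "
        'reply exactly "I don\'t know" instead of guessing.\n\n'
        f"Question: {question}\nAnswer:"
    )

# The resulting string would be sent to a model as-is:
print(build_prompt("What is the boiling point of osmium?"))
```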
Prompt engineering doesn't feel like an activity that creates sustainable AI advancement. A prompt may work well with one model in most situations, but even the best practices seem too experimental.
For their competitors to avoid a PR disaster, isn't it better to look inside the model? Perhaps observe the weights when the AI says something that you want to avoid in the future. A safeguard could trigger if the model is heading in that direction.
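The idea above could be sketched as a decode-time safeguard: watch the model's next-token distribution and abort generation when too much probability mass drifts toward disallowed output. Everything here is hypothetical; `next_token_probs` is a stub standing in for a real model, and the blocklist and threshold are made-up illustrations.

```python
BLOCKLIST = {"slur", "leak"}   # tokens we never want to emit (illustrative)
THRESHOLD = 0.2                # abort if blocked probability mass exceeds this

def next_token_probs(context: list[str]) -> dict[str, float]:
    """Stub standing in for a real language model's softmax output."""
    return {"hello": 0.5, "world": 0.4, "slur": 0.1}

def guarded_decode(context: list[str], max_tokens: int = 5) -> list[str]:
    """Greedy decoding that trips a safeguard when output heads off-limits."""
    out = list(context)
    for _ in range(max_tokens):
        probs = next_token_probs(out)
        blocked_mass = sum(p for t, p in probs.items() if t in BLOCKLIST)
        if blocked_mass > THRESHOLD:
            raise RuntimeError("safeguard tripped: generation heading off-limits")
        out.append(max(probs, key=probs.get))  # greedy pick of the top token
    return out
```

With the stub above, the blocked mass stays at 0.1, so decoding proceeds; a real deployment would need the model's actual logits rather than a fixed distribution.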
> Prompt engineering doesn't feel like an activity that creates sustainable AI advancement.
ChatGPT was created from GPT via prompt engineering? An inverse ChatGPT, where the user answers questions instead of the other way around, would also have applications.
> Prompt engineering doesn't feel like an activity that creates sustainable AI advancement.
Agreed, it should really be rolled into fine-tuning. If you're building a model for PR, for example, it should already be fine-tuned so it can't say anything disastrous. Prompt engineering is only really relevant to general-purpose models, which aren't that useful to begin with (other than "fun" chatting).
LLMs are trained in much the same way, so while your point stands, most or all of the tips here are going to be useful for LLMs for at least a year or so.
If a tip were something like "use XML tags to give clarity to the model," then it wouldn't be sustainable.
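The XML-tag tip mentioned above amounts to wrapping each part of a prompt in markup so the model can tell instructions apart from data. A minimal sketch, where the tag names and function are illustrative rather than any model's required format:

```python
def tagged_prompt(instructions: str, document: str) -> str:
    """Separate instructions from input data with XML-style tags."""
    return (
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<document>\n{document}\n</document>"
    )

prompt = tagged_prompt(
    "Summarize the document in one sentence.",
    "LLMs tend to follow structured prompts more reliably.",
)
print(prompt)
```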
Interesting analogue to books like "How to Win Friends and Influence People". This genre of self-help books includes a lot of things that, when you squint, look like prompt engineering on humans.
Although doesn't the Dale Carnegie book focus mostly on being a good listener and using emotional intelligence? Prompt engineering doesn't seem to be very similar to this AFAIK.
Just as much as by the equivalence of software developer and software engineer. Both are a misuse of the term "engineer". Maybe one more than the other, but still.
https://promptbattle.com
I thought this was a meme, but I have actually seen some job posts for “prompt engineer”.