item 39944996

Favorites from our prompt engineering tournament

39 points | jzone3 | 2 years ago | blog.promptlayer.com

20 comments

six_four_eight | 2 years ago
Towards the end they state: ‘… just adding “do not hallucinate” has been shown to reduce the odds a model hallucinates.’ I find this surprising; it doesn’t fit with my understanding of how a language model works. But I’m very much a novice. Would this be due to update training including feedback that marks bad responses with the term “hallucinate”?
ManuelKiessling | 2 years ago
My mental model is “telling an LLM not to make any mistakes is like telling a depressed person to stop feeling bad”.
anotheryou | 2 years ago
My model is that you need to give it ways to make the hallucination not the most plausible thing. I prefer to tell it that it can say "I don't know".
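In practice, the suggestion above amounts to making "I don't know" an explicitly sanctioned answer in the prompt. A minimal sketch (the helper name and wording are illustrative, not from the article):

```python
def build_system_prompt(task: str) -> str:
    """Compose a system prompt with an explicit escape hatch, so that
    admitting ignorance is a sanctioned completion rather than leaving a
    plausible-sounding fabrication as the only option.
    (Hypothetical helper; the exact wording is illustrative.)"""
    return (
        task.strip() + "\n"
        "If you are not confident in the answer, reply exactly with "
        "\"I don't know\" instead of guessing."
    )

prompt = build_system_prompt("Answer questions about our internal billing API.")
print(prompt)
```

The resulting string would be sent as the system message to whatever chat-completion API is in use; the point is only that the escape hatch is stated explicitly rather than hoping "do not hallucinate" does the work.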
a1371 | 2 years ago
Prompt engineering doesn't feel like an activity that creates sustainable AI advancement. A prompt may work well with one model in most situations, but even the best practices seem too experimental.

For their competition to avoid a PR disaster, isn't it better to look inside the model? Perhaps observe the weights when the AI says something that you want to avoid in the future. A safeguard could trigger if the model is going in that direction.

kang | 2 years ago
> Prompt engineering doesn't feel like an activity that creates sustainable AI advancement.

Wasn't ChatGPT created from GPT via prompt engineering? An inverse ChatGPT, where the user answers questions instead of the other way around, also has applications.

dvt | 2 years ago
> Prompt engineering doesn't feel like an activity that creates sustainable AI advancement.

Agreed, it should really be rolled into fine-tuning. If you're building a model for PR, for example, it should already be fine-tuned so it can't say anything disastrous. Prompt engineering is only really relevant to general-purpose models, which aren't that useful to begin with (other than "fun" chatting).

LZ_Khan | 2 years ago
LLMs are trained in much the same way, so while your point stands, most/all of the tips here are going to be useful for LLMs for at least a year or so.

If a tip were something like "use XML tags to give clarity to the model," then it wouldn't be sustainable.
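For readers unfamiliar with the XML-tag tip mentioned above, it usually looks something like this (a generic sketch; the tag names are arbitrary, not any vendor's required schema):

```python
def wrap_prompt(document: str, question: str) -> str:
    """Delimit each part of the prompt with XML-style tags so the model
    can distinguish instructions from quoted data.
    (Illustrative sketch; tag names are made up for this example.)"""
    return (
        f"<document>\n{document}\n</document>\n"
        f"<question>\n{question}\n</question>\n"
        "Answer using only the text inside the <document> tags."
    )

print(wrap_prompt("The cake was delivered on Friday.",
                  "When was the cake delivered?"))
```

Whether a given model actually benefits from this delimiting is exactly the kind of model-specific, empirical question the parent comment is pointing at.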

agys | 2 years ago
Last year I saw a live “prompt battle” and it was great: a single-elimination tournament with an applause meter, a hype man and music!

https://promptbattle.com

snapcaster | 2 years ago
Interesting analogue to books like "How to Win Friends and Influence People". This genre of self-help books includes a lot of things that, when you squint, look like prompt engineering on humans.
babyshake | 2 years ago
Although doesn't the Dale Carnegie book focus mostly on being a good listener and using emotional intelligence? Prompt engineering doesn't seem to be very similar to this, AFAIK.
NegativeLatency | 2 years ago
I couldn't find any results, just prompts. Did I miss something?
furyofantares | 2 years ago
I guess it was an accurately titled article for once.
eggfriedrice | 2 years ago
I'm not sure I understood much except the photo of the cake.
xyst | 2 years ago
Am I the only one annoyed by the term “prompt engineer{ing}”?

I thought this was a meme, but I have actually seen some job posts for “prompt engineer”.

makk | 2 years ago
Those job posts are seeking engineers who show up on time and deliver their work expeditiously.
kataklasm | 2 years ago
Just as much as by the equivalence of software developer and software engineer. Both are a misuse of the term "engineer". Maybe one more than the other, but still.
n4r9 | 2 years ago
I get a feeling of incongruity about the term. I think it's because prompts aren't a "system". At least as far as I understand it.