nateroling|6 months ago
Seems like an LLM should be able to judge a prompt, and collaboratively work with the user to improve it if necessary.
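The judge-then-collaborate idea can be sketched as a simple loop: ask the model to critique the prompt, then ask it to rewrite the prompt using that critique, and repeat. This is a minimal sketch, not anyone's actual implementation; `ask_llm` is a placeholder for whatever chat-completion call you use, and the template wording is illustrative.

```python
# Sketch of a judge-then-revise loop for prompts. `ask_llm` is a
# hypothetical callable (prompt string in, response string out);
# everything else is plain Python.

JUDGE_TEMPLATE = (
    "Rate the following prompt from 1-10 for clarity, specificity, "
    "and testability, then list concrete improvements:\n\n{prompt}"
)

REVISE_TEMPLATE = (
    "Rewrite the prompt below, applying this critique:\n\n"
    "Critique:\n{critique}\n\nPrompt:\n{prompt}"
)

def improve_prompt(prompt, ask_llm, rounds=2):
    """Alternate judge/revise calls; return the final prompt."""
    for _ in range(rounds):
        critique = ask_llm(JUDGE_TEMPLATE.format(prompt=prompt))
        prompt = ask_llm(
            REVISE_TEMPLATE.format(critique=critique, prompt=prompt)
        )
    return prompt
```

In an interactive setting you would show the critique to the user between rounds rather than looping blindly, which is the collaborative part of the suggestion.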
alexc05|6 months ago
https://www.dbreunig.com/2025/06/10/let-the-model-write-the-... is an example.
You can see the hands-on results in this Hugging Face branch I was messing around in:
Here is where I tell the LLM to generate prompts for me based on the research so far:
https://github.com/AlexChesser/transformers/blob/personal/vi...
Here are the prompts that produced:
https://github.com/AlexChesser/transformers/tree/personal/vi...
And here is the result of those prompts:
https://github.com/AlexChesser/transformers/tree/personal/vi.... (also look at the diagram folders, etc.)
chopete3|6 months ago
Write your prompt in some rough shape and ask Grok:
Please rewrite this prompt for higher accuracy
-- Your prompt
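The recipe above amounts to wrapping your draft prompt in a fixed rewrite instruction and sending the whole thing to the model. A minimal sketch, assuming nothing beyond string formatting (the function name is mine, and the instruction wording follows the comment):

```python
# Build the one-shot "please rewrite this prompt" request described
# above. `draft_prompt` is the prompt you want improved; the returned
# string is what you would paste into Grok (or any chat model).

def rewrite_request(draft_prompt):
    return (
        "Please rewrite this prompt for higher accuracy\n\n"
        "-- " + draft_prompt
    )
```

You then replace your draft with the model's answer, optionally repeating until it stops changing.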
AlecSchueler|6 months ago
How do you know it won't introduce misinformation about white genocide into your prompt?