item 44953330

nateroling | 6 months ago

Can you write a prompt to optimize prompts?

Seems like an LLM should be able to judge a prompt, and collaboratively work with the user to improve it if necessary.


alexc05|6 months ago

100% yes! Several other writers have been doing parallel work on that over the last couple of weeks.

https://www.dbreunig.com/2025/06/10/let-the-model-write-the-... is an example.

You can see the hands-on results in this Hugging Face branch I was messing around in:

Here is where I tell the LLM to generate prompts for me based on the research so far:

https://github.com/AlexChesser/transformers/blob/personal/vi...

Here are the prompts that produced:

https://github.com/AlexChesser/transformers/tree/personal/vi...

And here is the result of those prompts:

https://github.com/AlexChesser/transformers/tree/personal/vi... (also look at the diagram folders, etc.)

chopete3|6 months ago

I use Grok to write the prompts. It's excellent. I think human-created prompts are insufficient in almost all cases.

Write your prompt in some rough form and ask Grok:

Please rewrite this prompt for higher accuracy.

-- Your prompt
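The workflow above is just string composition before the API call. A minimal sketch, assuming a hypothetical `ask_model` function standing in for whatever chat API you use (only the prompt construction is the point):

```python
# Wrap a draft prompt in the rewrite request described above.
META_TEMPLATE = "Please rewrite this prompt for higher accuracy.\n\n-- {prompt}"

def build_meta_prompt(draft: str) -> str:
    """Compose the meta-prompt that asks the model to improve a draft prompt."""
    return META_TEMPLATE.format(prompt=draft)

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call."""
    raise NotImplementedError("plug in your LLM client here")

draft = "Summarize this article."
meta = build_meta_prompt(draft)
print(meta)
```

You would then send `meta` to the model and use its reply as your actual prompt.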

AlecSchueler|6 months ago

Wouldn't you be better off doing it with almost anything other than Grok?

How do you know it won't introduce misinformation about white genocide into your prompt?

user3939382|6 months ago

The LLM is basically a runtime that needs optimized input because the output is compute-bottlenecked. Input quality scales with domain knowledge, specificity, and therefore human time invested. You can absolutely navigate an LLM's attention piecemeal around a spec until you build an optimized input.

CuriouslyC|6 months ago

This is pretty much DSPy.
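DSPy's core move is to treat the prompt as a tunable parameter: generate candidate phrasings and keep whichever scores best on a small labelled set. A toy sketch of that loop, with a fake `model` function standing in for a real LLM call (this is the idea, not the DSPy API):

```python
# Toy model: in this contrived example, it answers correctly
# only when the prompt asks for a concise reply.
def model(prompt: str, question: str) -> str:
    return "4" if "concise" in prompt else "The answer is 4, I think."

# Candidate prompt phrasings to search over.
candidates = [
    "Answer the question.",
    "Answer the question. Be concise: reply with the answer only.",
]

# Tiny labelled evaluation set: (question, expected answer).
dataset = [("What is 2 + 2?", "4")]

def score(prompt: str) -> float:
    """Fraction of the dataset the model answers exactly under this prompt."""
    return sum(model(prompt, q) == a for q, a in dataset) / len(dataset)

best = max(candidates, key=score)
print(best)  # the concise variant wins on this toy metric
```

DSPy automates this search (and few-shot example selection) against a real model and metric instead of the stubs above.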

slt2021|6 months ago

Yes, just prepend your request to the LLM with "Please give me a well-structured LLM prompt that will solve this problem..."