top | item 44169305


sach1 | 9 months ago

Yeah basically! I thiught it was a bit ridiculous that they would charge for something so easy to make yourself. For example, I have a series of md files with iterated prompts in a folder and start off a human generated prompt to feed into [insert favorite LLM].

an example seed: "create a prompt for an agent that will help me reduce prompt token usage and speed up results without losing necessary complexity. can you build a prompt that I use to this end?"

After a bunch of recursive prompting:

"Optimize the provided 'Original Prompt' into an 'Optimized Prompt'.

The 'Optimized Prompt' must:

- Be token-efficient.
- Be maximally clear, precise, unambiguous, with direct instructions.
- Be ideal for advanced AI model processing.
- Preserve the 'Original Prompt's' core intent and task.
- Retain the 'Original Prompt's' details, nuances, analytical requirements, output formats, and complexity, without oversimplification.

Apply this optimization method:

1. From 'Original Prompt', eliminate: conversational filler, redundancy, pleasantries, self-references.
2. Use: strong, direct action verbs.
3. Be: specific, direct. Replace vague terms with precise equivalents.
4. Clearly state: task, context, constraints, output format. Explicitly define implied formats (e.g., list, JSON, steps).
5. Logically group related instructions.
6. Ensure 'Optimized Prompt' is a direct command."

It's brutishly simple, but part of the (imho self-evident) process is editing the prompts by hand as you continue to feed the output back in on itself.
