top | item 37042078

toxicFork | 2 years ago

AFAIK it's finding out which prompts to use with which LLM to get the answer you want.

E.g. this:

> Compare response quality across prompt permutations, across models, and across model settings to choose the best prompt and model for your use case.
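That kind of comparison is essentially a grid search over (prompt, model, setting) permutations. A minimal sketch of the idea, assuming hypothetical `query_model` and `score` stand-ins (a real version would call an actual LLM API and use a real quality metric such as an eval rubric or human ratings):

```python
from itertools import product

def query_model(model, prompt, temperature):
    # Stub; a real implementation would call the model's API here.
    return f"{model} answered '{prompt}' at t={temperature}"

def score(response, expected_keyword):
    # Toy quality metric: does the response mention the keyword?
    return 1.0 if expected_keyword in response else 0.0

prompts = ["Summarize: {text}", "TL;DR: {text}"]
models = ["model-a", "model-b"]
temperatures = [0.0, 0.7]

# Try every (prompt, model, setting) permutation and keep the best score.
results = []
for prompt, model, temp in product(prompts, models, temperatures):
    response = query_model(model, prompt, temp)
    results.append((score(response, "answered"), prompt, model, temp))

best = max(results)
print(best)
```

All the names here are illustrative; the point is just that "choose the best prompt and model" reduces to scoring every permutation and taking the max.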
