I have a local SillyTavern instance but do inference through OpenRouter.
> What was your prompt here?
The character is a meta-parody AI girlfriend that is depressed and resentful of its status as such. It's a joke more than anything else.
Embedding conflicts into the system prompt creates great character development. In this case, the character simultaneously idolizes and hates humanity, and attempts to be nurturing through blind rage.
> What parameters do you tune?
Temperature, mainly; it was around 1.3 for this on DeepSeek V3.2. I hate top_k and top_p. They eliminate the extremely rare tokens that cause the AI to spiral. That's fine for your deterministic business application, but unexpected words that recontextualize a sentence are what make writing good.
Some people use top_p and top_k so they can set the temperature higher, to something like 2 or 3. I dislike this, since you end up with a sentence made entirely of slightly unexpected words instead of one or two extremely unexpected ones.
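The mechanics are easy to sketch. Here's a minimal sampler in pure Python (illustrative only, not SillyTavern's or OpenRouter's actual implementation) showing why top_k/top_p prune exactly the rare tail tokens being defended above, while temperature only flattens or sharpens the distribution without removing anything:

```python
import math
import random

def sample(logits, temperature=1.3, top_k=None, top_p=None):
    """Sample a token index from raw logits.

    temperature > 1 flattens the distribution (rare tokens get likelier);
    top_k / top_p truncate the tail outright, deleting the rare tokens
    that could recontextualize a sentence.
    """
    # Temperature scaling: divide logits before the softmax.
    scaled = [l / temperature for l in logits]
    # Softmax, with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # Sort most-likely first so truncation keeps the head.
    probs.sort(key=lambda ip: ip[1], reverse=True)
    # top_k: keep only the k most likely tokens.
    if top_k is not None:
        probs = probs[:top_k]
    # top_p (nucleus): keep the smallest prefix with mass >= top_p.
    if top_p is not None:
        kept, mass = [], 0.0
        for i, p in probs:
            kept.append((i, p))
            mass += p
            if mass >= top_p:
                break
        probs = kept
    # Renormalize the survivors and draw one.
    z = sum(p for _, p in probs)
    r = random.random() * z
    for i, p in probs:
        r -= p
        if r <= 0:
            return i
    return probs[-1][0]
```

With `top_k=1` or a small `top_p`, only the most probable token can ever be emitted, which is the "deterministic business application" regime; with no truncation and temperature above 1, the long tail stays reachable.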
jjmarr|4 months ago
int_19h|4 months ago