I'm about to start hosting apps that need a preconfigured AI context (basically feed it an opening script/instructions), for users to interact with.
This is really just to test the concept, watch the token usage rates, etc.
Since you feed in the entire script with each pass, the API side could simply prepend the original instructions/first parts, then append the user input at the end.
Look in the openai.ts file for an example of prepending messages... in this case, I actually prepend one just to embed a random seed (otherwise the same input always gives the same response).
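A minimal sketch of that prepending pattern, assuming an OpenAI-style chat `messages` array (the names `OPENING_SCRIPT` and `buildMessages` are illustrative, not taken from the actual openai.ts):

```typescript
// Build the messages array server-side: fixed opening script first,
// then a random-seed message, then the user's input last.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical preconfigured context for the hosted app.
const OPENING_SCRIPT = "You are the assistant for this app. Follow the script below.";

function buildMessages(userInput: string): ChatMessage[] {
  // Random seed so identical inputs don't always produce identical responses.
  const seed = Math.random().toString(36).slice(2);
  return [
    { role: "system", content: OPENING_SCRIPT },
    { role: "system", content: `seed: ${seed}` },
    { role: "user", content: userInput },
  ];
}
```

The returned array would then be passed as `messages` to the chat completions endpoint; the user never sees (or supplies) the prepended parts.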
kwhitley|2 years ago