cjdell | 2 years ago
On top of that, it would be good if the safety LLM could give a confidence score for the answer produced by the main LLM. Then you could make multiple attempts with different sampling parameters and only show the highest-confidence answer to the user.
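A minimal sketch of that best-of-N idea, assuming the two model calls are hypothetical stubs (`generate_answer` for the main LLM, `confidence_score` for the safety LLM); a real system would replace them with actual API calls:

```python
import random

def generate_answer(prompt: str, temperature: float) -> str:
    # Stand-in for the main LLM call; hypothetical, not a real API.
    return f"[t={temperature}] answer to {prompt!r}"

def confidence_score(prompt: str, answer: str) -> float:
    # Stand-in for the safety LLM returning a confidence in [0, 1];
    # a real system would prompt a second model to grade the answer.
    return random.random()

def best_of_n(prompt: str, temperatures: list[float]) -> tuple[str, float]:
    """Generate one candidate per temperature, score each with the
    safety model, and return the highest-confidence answer."""
    candidates = [generate_answer(prompt, t) for t in temperatures]
    scored = [(confidence_score(prompt, a), a) for a in candidates]
    best_score, best_answer = max(scored, key=lambda pair: pair[0])
    return best_answer, best_score
```

The sampling parameters varied here are just temperatures, but the same loop works for any knob (top-p, system prompt variants, etc.).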