bclavie | 3 years ago
The constraints can be put in place in a bunch of different ways. Prompt engineering is a big one: instruction-tuned models can be pretty good at following very restrictive instructions. You do end up sacrificing some creativity in your answers by adding a lot of restrictions, but it generally works quite well as a safeguard layer. A lot of the cool LLM applications are, first and foremost, proper prompting. Setting a low temperature is also key, as the higher-likelihood completions are _generally_ (but not always) less made-up. ChatGPT makes this a bit harder: you have no control over the model parameters (the temperature is OpenAI-set) and cannot control the original prompt, meaning you can't fully be in charge of the instructions it gets, so any mitigation to avoid hallucinations will have its limits.
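A minimal sketch of that safeguard layer, assuming a generic chat-completion-style API: a restrictive system prompt plus a temperature of 0. The model name and the exact prompt wording are placeholders, not anything prescribed here.

```python
# Hypothetical safeguard layer: restrictive instructions + low temperature.
# The model name is a placeholder; the payload shape mirrors common
# chat-completion APIs but is provider-agnostic.
RESTRICTIVE_INSTRUCTIONS = (
    "You are a documentation assistant. Answer ONLY using the provided "
    "context. If the context does not contain the answer, reply exactly: "
    "'I don't know.' Do not speculate."
)

def build_request(question: str, context: str) -> dict:
    """Assemble a chat-completion request payload with the safeguards applied."""
    return {
        "model": "some-instruction-tuned-model",  # placeholder name
        "temperature": 0.0,  # low temperature: prefer high-likelihood tokens
        "messages": [
            {"role": "system", "content": RESTRICTIVE_INSTRUCTIONS},
            {
                "role": "user",
                "content": f"Context:\n{context}\n\nQuestion: {question}",
            },
        ],
    }

req = build_request("What does foo() return?", "The foo() helper returns 42.")
```

The point is just that the restrictions live in a system prompt you control and the temperature is pinned low, which is exactly what you can't do through the ChatGPT UI.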
After that, yeah, the context documents you provide are pretty important for grounding it. It ties back into the prompt, but you can more or less drill it into a low-temperature, instruction-fine-tuned model that if it can't find the answer within the set of documents you provide, it should simply not answer. Again, you lose out in some contexts (it's a bad feeling on the user's end to not get an answer), but you also ensure that your model isn't live-freewheeling about a new framework called Reagular...
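A toy sketch of that grounding-with-refusal flow. The keyword-overlap retriever here is a stand-in for whatever real retrieval you'd use (embeddings, BM25, etc.), and the refusal string is an assumption; the shape of the logic is the point: no relevant documents, no answer.

```python
# Toy grounding layer: only answer when retrieval actually finds something.
# The word-overlap "retriever" is a placeholder for a real one.
def retrieve(question: str, documents: list[str]) -> list[str]:
    """Return documents sharing at least one word with the question."""
    q_terms = set(question.lower().split())
    return [d for d in documents if q_terms & set(d.lower().split())]

def answer_or_refuse(question: str, documents: list[str]) -> str:
    hits = retrieve(question, documents)
    if not hits:
        # Refuse rather than let the model freewheel about Reagular.
        return "I don't know."
    # In a real system, this is where you'd call the model with the
    # hits injected into the prompt as grounding context.
    return f"(would query the model with {len(hits)} grounding document(s))"
```

The trade-off described above lives in that `if not hits` branch: the user sometimes gets a flat "I don't know", but never a confident answer invented from nothing.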