jordn | 3 years ago
Worth pointing out that once you fine-tune the models, you typically eliminate the prompt entirely. Fine-tuning also tends to narrow the model's capabilities considerably, so I expect prompt injection will be much lower risk.
muzani | 3 years ago
For chat systems, a variation of 'AI:', 'Human:', 'You:', or 'username:'.
These occur a lot in samples, and then are reproduced in open source and copied prompts.
Three characters seems to be the optimum at higher temperatures. With a longer stop sequence like #####, the model sometimes outputs #### instead, which doesn't trigger the stop. Too short, and it might confuse a #hashtag for a stop sequence.
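To make the trade-off concrete, here's a minimal sketch of client-side stop-sequence truncation (the function name and logic are illustrative, not any particular API): a one-character stop like "#" false-triggers on a hashtag, while "###" does not.

```python
def truncate_at_stop(text, stop_sequences):
    """Cut generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# A chat-style stop sequence cuts off the model's imagined next turn:
truncate_at_stop("Human: hi\nAI: hello\nHuman: bye", ["\nHuman:"])
# -> "Human: hi\nAI: hello"

# A single "#" stop false-triggers on an ordinary hashtag:
truncate_at_stop("check this #hashtag out", ["#"])
# -> "check this "

# "###" leaves the hashtag alone:
truncate_at_stop("check this #hashtag out", ["###"])
# -> "check this #hashtag out"
```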