lightbulbish | 7 months ago
Despite explicit instructions in all sorts of rules and .md files, the models still make changes where they should not. When caught, they innocently say "you're right, I shouldn't have done that, as it directly goes against your rule of <x>".
Just to be clear, are you suggesting that currently, with your existing setup, the AIs always follow the instructions in your rules and prompts? If so, I want your rules, please. If not, I don't understand why you would diss a solution that aims to hardcode away some of the LLM prompt-interpretation problems that exist.