TheKelsbee|10 months ago

Isn’t this basically the Swiss cheese model? If your two input AIs hallucinate, or your consensus AI misunderstands the input, you will still have confabulations in the output?

I have this same thought, and have tried similar approaches.

OP: Have you trained or fine-tuned a model that specifically reasons over the worker model outputs against the user input? Or is this basically just taking a model and turning the temperature down to near 0?

kuberwastaken|10 months ago

From all my testing, this honestly never happened even once; plus the judge model (which I've kept strictly a reasoning model) also evaluates each answer individually before "judging" the consensus.
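The flow the reply describes (independent worker models, a judge that reviews each answer on its own before ruling on the consensus) could be sketched roughly as below. This is a minimal illustration, not the OP's implementation: the worker and judge calls are placeholder functions, and the scoring scheme is invented for the sketch.

```python
# Hedged sketch of a workers-plus-judge pipeline: two worker models answer
# independently, a "judge" model scores each answer individually, and only
# then is a consensus answer selected. All model calls are stubbed.
from dataclasses import dataclass


@dataclass
class Judgment:
    answer: str
    score: float  # hypothetical 0-1 plausibility score from the judge


def worker_a(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"answer-A to: {prompt}"


def worker_b(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"answer-B to: {prompt}"


def judge_individually(prompt: str, answer: str) -> Judgment:
    # Placeholder: a reasoning model would evaluate the answer against
    # the user's prompt before any consensus step happens.
    return Judgment(answer=answer, score=1.0 if prompt in answer else 0.5)


def judge_consensus(prompt: str, judged: list[Judgment]) -> str:
    # Keep only answers that passed individual review, then pick the best.
    survivors = [j for j in judged if j.score >= 0.5]
    if not survivors:
        return "no reliable answer"
    return max(survivors, key=lambda j: j.score).answer


def pipeline(prompt: str) -> str:
    answers = [worker_a(prompt), worker_b(prompt)]
    judged = [judge_individually(prompt, a) for a in answers]
    return judge_consensus(prompt, judged)
```

The point of the two-stage judge is the one raised in the comment: an answer that would slip through a naive consensus vote still has to survive an individual review first.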