top | item 36025105


barking_biscuit | 2 years ago

An absolutely huge part of it for humans is that as the complexity of the thing being reasoned about grows, the accuracy of our mental models declines. On top of that, we train billions of individual models, each on a narrow slice of reality, and leverage a distributed consensus protocol of low-bandwidth "conversations" to update the weights. I think you could probably emulate this with conversational agents, each with its own LoRA, so that they wind up with differing opinions.
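A minimal numeric sketch of that idea, purely illustrative: each agent gets a private low-rank (LoRA-style) delta on a frozen shared base, so agents start with differing "opinions," and a "conversation" is a low-bandwidth pairwise nudge of only the adapter weights toward a peer's output. The class names, update rule, and learning rate are all assumptions for the toy, not any particular framework's API.

```python
import numpy as np

rng = np.random.default_rng(0)

D, R = 8, 2                                    # model width, LoRA rank
BASE = rng.normal(size=(D, D)) / np.sqrt(D)    # frozen shared base weights

class Agent:
    """Toy 'conversational agent': the shared base plus a private
    low-rank delta (B @ A), so each agent has a different opinion."""
    def __init__(self, rng):
        self.A = rng.normal(size=(R, D)) * 0.3   # LoRA down-projection
        self.B = rng.normal(size=(D, R)) * 0.3   # LoRA up-projection

    def opinion(self, x):
        # effective weights = BASE + B @ A, applied to a probe input
        return (BASE + self.B @ self.A) @ x

    def listen(self, x, peer_opinion, lr=0.05):
        # low-bandwidth update: one gradient step on the squared error
        # between my opinion and the peer's, touching ONLY the adapter
        err = self.opinion(x) - peer_opinion
        self.B -= lr * np.outer(err, self.A @ x)        # dL/dB
        self.A -= lr * np.outer(self.B.T @ err, x)      # dL/dA

def spread(agents, x):
    # mean per-dimension std across agents: how much they disagree
    return np.stack([a.opinion(x) for a in agents]).std(axis=0).mean()

agents = [Agent(rng) for _ in range(6)]
probe = rng.normal(size=D)

before = spread(agents, probe)
for _ in range(200):                  # rounds of pairwise "conversation"
    i, j = rng.choice(len(agents), size=2, replace=False)
    oi, oj = agents[i].opinion(probe), agents[j].opinion(probe)
    agents[i].listen(probe, oj)
    agents[j].listen(probe, oi)
after = spread(agents, probe)

print(f"disagreement before: {before:.3f}, after: {after:.3f}")
```

The differing initial adapters play the role of "a narrow slice of reality," and repeated pairwise exchanges shrink the disagreement without ever touching the shared base, loosely mirroring the consensus-through-conversation picture above.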


syngrog66 | 2 years ago

great insights! agreed