Could an LLM trained on nothing and looped upon itself eventually develop language, more complex concepts, and everything else, based on nothing? If you loop LLMs on each other, training them so they "learn" over time, will they eventually form and develop new concepts, cultures, and languages organically? I don't have an answer to that question, but I strongly doubt it. There's clearly more going on in the human mind than just token prediction.
coppsilgold|2 months ago
Also, I think there is a very high chance that given an existing LLM architecture there exists a set of weights that would manifest a true intelligence immediately upon instantiation (with anterograde amnesia). Finding this set of weights is the problem.
MyOutfitIsVague|2 months ago
> Also, I think there is a very high chance that given an existing LLM architecture there exists a set of weights that would manifest a true intelligence immediately upon instantiation (with anterograde amnesia).
I don't see why that would be the case at all. I regularly use the latest and most expensive LLMs, and I understand how they work well enough to implement one at the simplest level myself, so it's not just me being uninformed or ignorant.