I don’t think it’s a very valid concern, but LLMs can already do a limited amount of reasoning to derive information that wasn’t directly loaded into them. Sometimes that reasoning is even correct! The idea here would be that even if you didn’t load that kind of information into them, at a certain level of advancement they could work out the necessary plans from other information that has been loaded into them.