Yes it is: LLMs perform logical multi-step reasoning all the time; see math proofs, coding, etc. And whether you call it synthesis or statistical mixing is just semantics. Do LLMs truly understand? Who knows, probably not, but they do more than you make it out to be.
overgard|9 days ago
tovej|9 days ago
This is still only possible if the prompts given by the user resemble what's in the corpus. The same applies to the reasoning chain: for it to resemble actual logical reasoning, the same or extremely similar reasoning has to exist in the corpus.
This is not "just" semantics if your whole claim is that they are "synthesizing" new facts. That is your choice of misleading terminology, and it does not apply in the slightest.