In fact, when I compared it to Prolog, my point was exactly that, back in 2011, it was already possible to classify a sentence or text as grammatical or ungrammatical with just a few lines of code. What LLMs like ChatGPT do is generate text based on corpora scraped from across the web, grouped, tokenized, and trained for general-purpose use, but, in the end, they still need rules of the same kind as Prolog's to determine whether or not they can regurgitate that text to the user.

The problem is that most people can't see that the Rabbit R1 is, to say the least, a deceptive product. ChatGPT (and Gemini, Claude, and many others) doesn't do this: it doesn't trick its users into thinking the product does one thing when it actually does another.
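To make the "few lines of code" claim concrete: a Prolog DCG does this with a handful of rewrite rules, and the same idea can be sketched in Python with a tiny recursive context-free-grammar checker. The grammar and vocabulary below are invented for illustration; this is a toy sketch of the technique, not anyone's actual system.

```python
# Toy CFG checker: a few rules suffice to label a sentence
# grammatical or ungrammatical, like a Prolog DCG would.
# Non-terminals map to lists of productions; bare words are terminals.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"]],
    "N":   [["cat"], ["mouse"]],
    "V":   [["chases"]],
}

def parse(symbol, words, pos):
    """Return the set of word positions reachable after matching `symbol`."""
    if symbol not in GRAMMAR:  # terminal: match one word
        return {pos + 1} if pos < len(words) and words[pos] == symbol else set()
    ends = set()
    for production in GRAMMAR[symbol]:
        starts = {pos}
        for part in production:
            starts = {e for s in starts for e in parse(part, words, s)}
        ends |= starts
    return ends

def grammatical(sentence):
    """A sentence is grammatical iff 'S' consumes every word."""
    words = sentence.split()
    return len(words) in parse("S", words, 0)
```

For example, `grammatical("the cat chases the mouse")` succeeds while `grammatical("cat the chases mouse the")` fails, with no training data involved — just rules.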
tiagod|1 year ago
mtgr18977|1 year ago
As I said, I know I pushed the limits; that's on me.