mtgr18977 | 1 year ago

In fact, when I compared it to Prolog, my point was exactly that, back in 2011, you could already classify a sentence or text as grammatical or ungrammatical with just a few lines of code. What LLMs like ChatGPT do is generate text from corpora scattered across the web, collected, tokenized, and trained for general-purpose use, but, in the end, they still need rules like Prolog's to decide whether or not the text they regurgitate to the user is well-formed.
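The "few lines of code" grammar check being described is the kind of thing a Prolog definite clause grammar (DCG) does. Here is a rough Python analogue of that idea (not the commenter's actual code): a tiny recursive-descent recognizer for a toy context-free grammar, with a made-up vocabulary purely for illustration.

```python
# A Python sketch of a Prolog-DCG-style grammaticality check.
# The grammar and lexicon are invented for illustration.
GRAMMAR = {
    "S":   [["NP", "VP"]],          # sentence = noun phrase + verb phrase
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["cat"], ["dog"]],
    "V":   [["sees"], ["chases"]],
}

def parse(symbol, words, i):
    """Return every index reachable after matching `symbol` from index i."""
    if symbol not in GRAMMAR:  # terminal: must match the next word exactly
        return [i + 1] if i < len(words) and words[i] == symbol else []
    ends = []
    for production in GRAMMAR[symbol]:
        positions = [i]
        for part in production:
            positions = [e for p in positions for e in parse(part, words, p)]
        ends.extend(positions)
    return ends

def grammatical(sentence):
    """True iff some parse of S consumes the whole sentence."""
    words = sentence.split()
    return len(words) in parse("S", words, 0)

print(grammatical("the cat sees a dog"))  # True
print(grammatical("cat the sees"))        # False
```

In Prolog itself the same grammar is even shorter (e.g. `sentence --> noun_phrase, verb_phrase.`), which is the economy the comment is pointing at.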

The problem is that most people can't see that the Rabbit R1 is, at the very least, a deceptive product. ChatGPT (and Gemini, Claude, and many others) doesn't do this: it doesn't trick its users into thinking the product does one thing when it actually does another.

tiagod | 1 year ago

I think you have a misconception of how these transformer models work.

mtgr18977 | 1 year ago

I know how they work, and I think it's really impressive work. But in the end, they still have to perform a grammatical check (like Prolog sentences) to be sure the text should be sent.

As I said, I know I pushed the limits; that's on me.