top | item 41065576


welkinSL | 1 year ago

I wish to be proved wrong too. The current situation around "LLMs" and "GPT" looks very much like hype. I cannot see how they can be used beyond problems that do NOT require accuracy and precision.

IMO the problem lies in "magical thinking": that we can just say a spell and have the computer present the best solution ever, without even knowing what we specifically want in the first place. Kind of like asking the AI for the answer to the meaning of life, the universe, and everything - "forty-two" (from <<The Hitchhiker's Guide to the Galaxy>>).

Personally, when I use them, a lot of the time I realise I am basically typing out the whole detailed requirement specification. First, it would be much more efficient for me to just do it myself. Second, in such cases existing technologies like formal methods seem more suitable, as we can verify our specification's soundness too. And most modern approaches to formal methods can do code generation as well.

Recent research (https://arxiv.org/abs/2404.04125v1) on image-text and text-to-image models shows that the performance of these pre-trained few-shot learners has a log-linear relationship with "concept frequency" in the training data. Hopefully this does not translate to LLMs, but my bet is that it does.
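To see why a log-linear relationship is bad news for "zero-shot" claims, here is a minimal sketch with entirely made-up numbers (the frequencies and accuracies below are hypothetical, not taken from the paper): if performance is linear in log(frequency), then every 10x increase in how often a concept appears in the pre-training corpus buys only a constant additive gain in accuracy.

```python
import math

# Hypothetical data: concept frequency in the pre-training corpus
# vs. zero-shot accuracy on tasks involving that concept.
freqs = [1e2, 1e3, 1e4, 1e5, 1e6]
accuracy = [0.12, 0.25, 0.38, 0.51, 0.64]

# Ordinary least-squares fit of: accuracy = slope * log10(freq) + intercept
xs = [math.log10(f) for f in freqs]
n = len(xs)
mx = sum(xs) / n
my = sum(accuracy) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, accuracy)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# With these numbers the fit is exact: each decade of extra data
# adds a fixed 0.13 to accuracy, i.e. exponentially more data is
# needed for each further linear improvement.
print(f"gain per decade of frequency: {slope:.2f}")
```

The takeaway of such a fit, if it holds, is that rare concepts stay hard: pushing accuracy up linearly requires growing the training data exponentially.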
