
jnmandal | 6 months ago

I think it's likely that the "AI" aspect of the "models" will soon get worse and worse with each iteration. This is because:

1. Data pollution/dilution -- with each subsequent generation, more and more LLM-produced content will make up the bulk of the web. This means it either gets into the data set, which makes the model "spiky", or it reduces the relevant knowledge available, which dumbs the model down.

2. LLMs were sort of a freak breakthrough in deep learning, and while it's remarkable, tuning them only makes them marginally better. Diminishing returns apply here. It's a new LLM, but it's still an LLM -- years-old technology now.

However, despite these two realities, the total utility/productivity/societal gain from a product (note I didn't say model) like ChatGPT can still increase by orders of magnitude. That is because companies like OpenAI, and many, many other technology corporations and startups, are figuring out how to leverage LLMs to do things much more powerful than just answering questions from the corpus (i.e., performing quality research, reevaluating their own output, controlling computers, etc.).

Consider, for example, that flat-screen displays were pioneered in like the '50s, and arguably they didn't become disruptively useful until the advent of smartphones. So yeah, the model may get sort of worse, or maybe more brittle, but it almost doesn't matter if they are figuring out what to do with the model and making it more useful for actual tasks. Sure, it's cute to talk to an AI persona and ask it questions, but that is probably the least important aspect of these types of models. Microsoft Word had Clippy, and yeah, that was cool. But the productivity gain came from word processing, the '.doc' filetype, filesharing, editing, etc. Clippy is just a meme now, and that's a likely future scenario for the "chat" features in LLM products IMHO.
