VieEnCode | 1 year ago
As far as I am aware, the author is right to claim that there is no solution to AI hallucination on the horizon, which is a severely limiting problem. I also understand we are reaching the boundaries of the useful training data available. Both factors suggest current AI improvement simply cannot follow the same trajectory as transistors under Moore's law.
wegfawefgawefg | 1 year ago
The average person just doesn't care about hallucination that much and probably doesn't even notice half the time.
Costs for gigantic models are really high now, but unless Moore's law stops they will come down. If Moore's law does stop, we have a bigger problem. Meanwhile, all the other small models that were already useful 10 years ago have gotten so cheap you can train them on a MacBook and deploy them on microcontrollers (sound/image identification and detection in fleet microphones/cameras). That was big ML 8 years ago.
LLMs are not all of AI. There are tons of use cases other than just chatbots. I think people forget that there are models doing depth alignment, trajectory planning, car tracking, license plate reading, etc.
Everyone is just burned out on the relentless advertising of GPT and is conflating that with all of AI. While this man is getting mad at sama for being cringe and hoping the AI world crashes and burns, the underlying technology is just propagating outwards to people who are actually using it.