srj | 5 months ago

FWIW I didn't downvote you. I don't work on AI personally, and while I have no way of proving it to you, I'm certainly not trying to shill for my employer.

My skepticism of AI safety is just skepticism of AI generally. These are amazing things, but I don't believe the technology is even a road to AGI. There's a reason it can give a chess move when prompted and explain all the rules and notation, but can't actually play chess: that's not in the training data. My issue is simply that I think the hype and anxiety are unnecessary. Now, this is most definitely just my opinion and has nothing to do with the company I work for, which I'd bet would disagree with me on all of this anyway. If I did believe this was a road to AGI, I actually would be in favor of AI safety regulation.

xpe|5 months ago

> My skepticism of AI safety is just because of skepticism of AI generally. These are amazing things, but I don't believe the technology is even a road to AGI.

Thanks for your response. I'm curious how to state your claim in a way that you would feel is accurate. Would you say "LLMs are not a road to AGI"?

I put ~zero weight on what an arbitrary person believes until they clarify their ideas, show me their model, and give me a prediction. So:

- Clarify: What exactly do you mean by "a road to"? Are you saying that no future technology that uses LLMs (for training? for inference? something else?) will assist the development of AGI?

- Model: On what model(s) of how the world works do you make your claims?

- Prediction: If you are right, when will we know, and what will we observe?

srj|5 months ago

Yes, I'm talking about LLMs in particular. I'm in the stochastic-parrot camp. Though I could be convinced humans are no more than stochastic parrots, in which case there would indeed be a path to AGI.

If I'm right, the breakthroughs will plateau even while applications of the technology continue to advance over the next several years.

xpe|5 months ago

> There's a reason it can give a chess move when prompted and explain all the rules and notation, but can't actually play chess: it's not in the training data.

I don't understand how you can claim an LLM can't play chess. Just as one example, see: https://dynomight.net/chess/
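One way to make this disagreement concrete: an experiment like the one at that link prompts a model with a game prefix in PGN and reads off the predicted next move, then checks whether the move is actually legal. The sketch below is a minimal, hypothetical harness for that idea; `query_llm` is a stand-in stub (not a real API), and the regex checks only that the reply is syntactically plausible SAN. A true legality check in the current position, which is exactly what separates "emits chess-shaped text" from "plays chess", would need a chess engine or a library such as python-chess.

```python
import re

def query_llm(pgn_prefix: str) -> str:
    """Hypothetical stand-in for a model call; a real experiment would
    send pgn_prefix to an LLM and return its predicted continuation."""
    return "Nf3"  # stub response for illustration

# Rough SAN (standard algebraic notation) pattern: castling, piece
# moves, pawn moves, captures, promotions, with optional check/mate.
SAN_RE = re.compile(
    r"^(O-O(-O)?|[KQRBN]?[a-h]?[1-8]?x?[a-h][1-8](=[QRBN])?)[+#]?$"
)

def next_move_is_plausible_san(pgn_prefix: str) -> bool:
    """Check only that the model's reply *looks like* a chess move.

    Passing this check does not mean the move is legal in the
    position; verifying legality is the stronger test of whether
    the model is really playing chess.
    """
    move = query_llm(pgn_prefix).strip()
    return bool(SAN_RE.match(move))

print(next_move_is_plausible_san("1. e4 e5 2. "))  # → True for "Nf3"
```

The syntactic filter alone is deliberately weak: it accepts "Nf3" and rejects "Nf9", but it would also accept a legal-looking move that is illegal in context, which is the distinction the dynomight experiment actually measures.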