top | item 35207478

netsroht | 2 years ago

This is why I prefer the term "weak AI". Weak AI is specifically trained to solve given tasks, whereas strong AI can teach itself to solve new tasks.

Whether humans are able to create strong AI is a philosophical question: while some argue it's not possible (can we be Gods?), others argue that it's the next logical evolutionary step.

Let's see if we can at least mimic strong AI by letting LLMs connect to external systems (the internet, money, more energy, etc.) and specifically allowing them to fine-tune themselves or train new NNs in general.
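That self-tuning idea can be sketched as a toy loop. This is purely illustrative: a single-weight "model" stands in for a neural net, and `fine_tune` / `autonomous_loop` are hypothetical names, nothing like a real LLM pipeline:

```python
# Toy illustration: a "model" (one weight) receives a stream of new
# tasks (target values) and fine-tunes itself on each one without
# human intervention -- the "mimic strong AI" loop from the comment.

def fine_tune(weight, target, lr=0.1, steps=100):
    """Gradient descent on the loss (weight - target)^2 for one task."""
    for _ in range(steps):
        grad = 2 * (weight - target)  # derivative of the squared error
        weight -= lr * grad
    return weight

def autonomous_loop(tasks, weight=0.0):
    """Pick up each new task as it arrives and retrain on it."""
    for target in tasks:
        weight = fine_tune(weight, target)
    return weight

# After the loop, the model has adapted itself to the latest task:
final = autonomous_loop([1.0, -3.0, 5.0])  # final is close to 5.0
```

Of course, the hard part isn't the loop itself but giving the system a reason to run it, which is where the goals-and-motivation question below comes in.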

Time will tell.

pdimitar | 2 years ago

Well, comments like yours are the ones that truly deserve discussion on this topic, just so you know my opinion. <3

> This is why I prefer the term "weak AI". Weak AI is specifically trained to solve tasks whereas strong AI can teach itself to solve new tasks.

Yes, we can call it a spectrum, though I'd think it's more like a multi-dimensional space. I wouldn't contest your definition, though; it's as valid as all the others, really.

And yeah, I agree: the ultimate general AI is one that can teach itself new tasks, utilize past experience even when the patterns don't match perfectly, and have some sort of sentience. And let's not forget that it must have actual goals and motivation (otherwise it'll just conclude that the best course of action is not to expend any effort and put itself into an infinite idle loop).

> Whether humans are able to create strong AI is a philosophical question: While some argue that's not possible (can we be Gods?), others argue that this is the next logical evolutionary step.

IMO people romanticize these topics too much. True AI will be "born" as a hyper-optimizing recursive machine (and a collection of algorithms), and it will eventually be limited by the physical reality it inhabits, so it'll self-balance just fine. It's strange how much spiritual value people attach to these things, though I somewhat understand it: that'll be the second truly intelligent and sentient "life form" we know of beside ourselves, so some metaphysical hand-waving seems unavoidable and maybe even desirable (as a moral correction mechanism, perhaps).

> Let's see if we can at least mimic strong AI when we let LLMs connect to external systems (internet, money, more energy, etc) and specifically allow themselves to fine-tune or train new NNs in general.

I have no doubt we'll eventually get there, but my feeling is that the current "AI" era is in a local maximum, and it won't crawl out of it easily.