top | item 43343857

teqsun | 11 months ago

This is the main reason why I have skepticism towards any claims of imminent AGI.

mellosouls|11 months ago

I'm also sceptical, but to be fair, we don't need to understand how the brain works to build AGI; that's just one (obvious) path.

southernplaces7|11 months ago

Since we don't yet have anything remotely like AGI, and don't even really know how the brain works or what consciousness is aside from being aware that we feel it, neither you nor anyone else really knows whether our path to consciousness is just one of many. For all we know it might be the only one. There could be some very big unknown unknowns in those waters.

JackFr|11 months ago

Who knows? Maybe it’ll turn out like flying and birds.

Studying birds gave us some data, but mimicking them wasn’t what got us jumbo jets.

srveale|11 months ago

Not trying to be sassy, but what definition of AGI are you using? I've never seen a concrete goal, just vague stuff like "better than humans at a wide range of tasks." Depending on which tasks you include and what percentage of humans you need to beat, we could be there already, or maybe we never will be. Several of these tests [1] have been passed, and others appear reasonably tractable. For instance, if Boston Dynamics cared about the Coffee Test, I bet they could pass it this year.

[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...

calepayson|11 months ago

> I've never seen a concrete goal, just vague stuff like "better than humans at a wide range of tasks."

I think you're pointing out a bit of a chicken-and-egg situation here.

We have no idea how intelligence works, and I expect this will remain the case until we create it artificially. Because we have no idea how it works, we put out a variety of metrics that don't measure intelligence but instead approximate something that (we think) only an intelligent thing could do. Then engineers optimize their ML systems for that task, we blow past the metric, and everyone is left feeling a bit disappointed that the result still doesn't feel intelligent.

Neuroscience has plenty of theories for how the brain works but lacks the ability to validate them. It's incredibly difficult (not to mention deeply unethical) to look into a working brain at the necessary spatial and temporal resolution.

I suspect we'll resolve the chicken-and-egg situation either when someone builds an architecture around a neuroscience theory and it just feels right, or when neuroscientists find evidence for some specific ML architecture within the brain.