chundicus | 4 years ago
I've always imagined AGI (perhaps naively) as being achieved by clever usage of ML, plus some utilization of classical/symbolic AI from pre-AI winter days, plus probably some unknown elements.
ddalex|4 years ago
a) an internal feedback loop that evaluates a possible output without actuating it, and self-modifies the parameters if that output is not what's needed
b) the capability (based on a) to model its own behaviours without acting on them, and to model other agents' behaviours and incorporate those models into the feedback
c) the ability of the model itself to intentionally switch between modelling its own behaviour and other agents' behaviour, as part of the feedback loop
i.e. what I feel is totally missing in self-driving cars today is the capability to model OTHER traffic participants' actions and intentions. An experienced and attentive human driver does this all the time: watches the pedestrians on the side in case they jump in front of the car, anticipates where other cars are LIKELY to go, considers how the bicyclist currently being overtaken may fall, even pays attention to a random soccer ball flying out of a courtyard because a kid may be chasing it. I am not seeing any self-driving car trying to model any agent other than itself. (A rough sketch of the (a)/(b) loop follows.)
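A minimal sketch of the kind of (a)/(b) loop I mean, in Python. Everything here is a hypothetical stand-in (the world model, the scoring, and the other-agent predictor are stubs, not any real system's API); it's only meant to show the shape of "simulate before actuating, fold other agents into the evaluation, and self-modify when the candidate output isn't good enough":

    def predict_other_agent(world):
        # (b) stub model of another agent: assume it keeps its current velocity
        return world["other_pos"] + world["other_vel"]

    def simulate(world, action):
        # stub world model: where would we end up if we took `action`?
        return world["self_pos"] + action

    def score(self_next, other_next):
        # stub evaluation: big penalty if we end up too close to the other agent
        return -10.0 if abs(self_next - other_next) < 1.0 else 1.0

    def act(world, params):
        candidate = params["preferred_action"]
        for _ in range(10):                          # (a) evaluate without actuating
            self_next = simulate(world, candidate)
            other_next = predict_other_agent(world)  # (b) other agents enter the loop
            if score(self_next, other_next) > 0:
                return candidate                     # acceptable, actually actuate it
            params["preferred_action"] *= 0.5        # self-modify and try again
            candidate = params["preferred_action"]
        return 0.0                                   # nothing acceptable found: do nothing

    world = {"self_pos": 0.0, "other_pos": 3.0, "other_vel": -1.0}
    print(act(world, {"preferred_action": 2.0}))     # prints 1.0 after one adjustment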
joakleaf|4 years ago
If you are interested in self-driving cars, I can highly recommend their presentation from November 2021:
https://youtu.be/uJWN0K26NxQ?t=1467
For me it felt more convincing than Tesla's (a few months prior);
https://www.youtube.com/watch?v=j0z4FweCy4M
Jasper_|4 years ago
The thing that would convince me AGI is ready would be for it to play a convincing game of poker. Or to join a conversation midway through, listen to it, and engage with it actively. Show that machines are able to pick up on social cues, understand them, and learn new ones. It's a high bar, yes, but in my opinion it's a prerequisite for a self-driving car that's able to share roadways with other cars, cyclists, and kids playing in the street.
rileyphone|4 years ago
"A robot modeled itself without prior knowledge of physics or its shape and used the self-model to perform tasks and detect self-damage."
arduinomancer|4 years ago
The reasoning is that, given enough training data, the system would know the pedestrian is going to jump out or the cyclist is going to fall, just based on the sheer volume of training examples. It would have seen that scenario tons of times in the image data.
Whether that will actually work is the question, though.
Nasrudith|4 years ago
Biology is glacially slow in comparison, and one of the advantages of computing is speed.
I believe that not modeling other agents is partly by design, a consequence of responsibility and blame frameworks. If your safety depends on the possible actions taken by others, you are being reckless. Extrapolating from current motion is more reliable than trying to profile everyone. "They are moving towards the street at 3 mph and are 20 ft away; their vector will intersect with the car, so brake to avoid a collision, or accelerate enough to leave the intersection zone before they can even reach us" seems a more reliable approach. It isn't as if a kid will suddenly teleport into the road.
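A back-of-envelope version of that extrapolation, in Python. The thresholds and numbers are illustrative only, not how any real planner works; it just shows the "project the current velocity forward and compare arrival times" idea:

    MPH_TO_FPS = 5280.0 / 3600.0   # 1 mph is roughly 1.47 ft/s

    def time_to_reach_road(distance_ft, speed_mph):
        # linear extrapolation: assume the pedestrian keeps their current velocity
        speed_fps = speed_mph * MPH_TO_FPS
        return float("inf") if speed_fps <= 0 else distance_ft / speed_fps

    def decide(ped_distance_ft, ped_speed_mph, car_time_to_clear_s):
        ped_eta = time_to_reach_road(ped_distance_ft, ped_speed_mph)
        if car_time_to_clear_s < ped_eta:
            return "proceed"   # we leave the intersection zone before they can reach us
        return "brake"         # otherwise slow down and avoid the conflict entirely

    # the example from above: 3 mph and 20 ft away -> about 4.5 s until they reach the road
    print(round(time_to_reach_road(20, 3), 1))                                    # 4.5
    print(decide(ped_distance_ft=20, ped_speed_mph=3, car_time_to_clear_s=2.0))   # proceed
    print(decide(ped_distance_ft=20, ped_speed_mph=3, car_time_to_clear_s=6.0))   # brake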
mindcrime|4 years ago
For what it's worth, this is my view as well. And I don't think it's particularly naive. Plenty of people have researched and/or are researching aspects of how to do this. But how to combine something like a neural network, with its distributed (and very opaque) representations, with an inference engine that "wants" to work with discrete symbols is non-obvious. Or at least it appears to be, since apparently nobody has figured out how to do it yet - at least not to the level of yielding AGI.
> but I've never heard a compelling argument for why pure ML would get us there.
The simplistic argument would be that ML models are, in some sense, trying to replicate "what the brain does", and it stands to reason that if your current toy ANNs (and let's be honest - the largest ANNs built to date are toys compared to the brain) are something like the brain, then in principle, if you scale them up to "brain level" (in terms of numbers of neurons and synapses), you should get more intelligence. Now on the other hand, anybody working with ANNs today will tell you that they are at best "biologically inspired" and aren't even close to actually replicating what biological neural networks do. So... while people like Geoffrey Hinton have gone on record as saying that "ANNs are all you need" (I'm paraphrasing, and I don't have a citation handy, sorry), I tend to think that in the short term a valid approach is exactly what you suggested: combine ML and use it for what it's good at (pattern recognition, largely), and use "old fashioned" symbolic AI for the things that it is good at (reasoning / inference / etc.).
Now, to figure out how to actually do that. :-)
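To make that split concrete, here's a toy sketch in Python. This isn't anyone's actual system: the perception function is a stub standing in for a trained network, the predicate names are made up, and the rule engine is the most naive forward chainer possible. It's only meant to show the shape of "ML emits discrete symbols, a symbolic layer reasons over them":

    def perceive(observation):
        # stand-in for an ANN classifier: maps raw input to discrete symbolic facts
        if "whiskers" in observation and "fur" in observation:
            return {("is_a", "x", "cat")}
        return {("is_a", "x", "unknown")}

    RULES = [
        # (premises, conclusion): if every premise is a known fact, add the conclusion
        ([("is_a", "x", "cat")], ("is_a", "x", "mammal")),
        ([("is_a", "x", "mammal")], ("is_a", "x", "animal")),
    ]

    def infer(facts):
        # naive forward chaining until no rule adds anything new
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if all(p in facts for p in premises) and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    facts = infer(perceive(["fur", "whiskers", "tail"]))
    print(sorted(facts))   # the symbolic layer has derived mammal and animal from cat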
mFixman|4 years ago
Playing chess at a grandmaster level was considered something only a human could do until the 1990s, and now no human has beaten the best computer in 17 years, while AGI seems further away than ever.
Mark my words: we'll create an AI that can pass the Turing test this decade, but we'll still be as far away from the badly defined general problem as we ever were.
MR4D|4 years ago
[0] - https://en.wikipedia.org/wiki/On_Intelligence
jcims|4 years ago
My brother just became a grandpa, and I was watching his grandson navigate the world this past weekend. It's unbelievable how quickly the brain can extrapolate a new relationship between objects/actions/etc. and then apply it elsewhere. Minimally you see it in the drinking action applied to all sorts of things, in this sort of repetitive clenching/releasing of the fingers to find things to grip without looking, and so on. Watching mom use a fork, he very quickly understood how to grasp and manipulate it. The model of just training everything from exogenous data into a flat network seems like it will hit some asymptotic limit.
dekhn|4 years ago
I am a proponent of the working theory that intelligence is an emergent property and that we can in principle create new intelligences in a lab (or ML warehouse) if we provide the proper conditions, but that finding and maintaining those conditions is extremely hard. Some state-of-the-art research today aims to integrate recognition capabilities (image recognition and object detection/tracking on video, voice extraction from audio, text) with advanced generative models for language and behavior, as well as realtime rendering systems that can create realistic humans.
If we combine those, we can make a bot that appears fully interactive, passes all Turing tests, and convinces a typical person that it's another person... and still has nothing inside that researchers would call "artificial intelligence". It might even solve science problems that we can't, without having any spark of creativity or agency. Or maybe, when we make a bot with all those properties, some uncanny valley is crossed and out pops something that has objective AGI?
As the wise robot once said, "if you can't tell the difference, does it really matter?". We should forge ahead with building datacenter-scale brains and feed them with data and algorithms, while also maintaining a cadre of research scientists who are attuned to the ethical challenges of doing so, an ops team trained to recognize the early signs of sentience, and an exec team with humanity.
stereolambda|4 years ago
Heuristically, we came to be by a very dumb process of piling up newer generations. If my pet communicated with me at the level of GPTx, I would be very impressed. That's why nowadays I have some scepticism about the ANN critics' arguments, though I think it would be neat if they were right.
The thing that I dislike the most in these discussions is the pervasiveness of the AGI concept and the assumption of a linear scale of intelligence. Again, I can intuitively say that I'm more intelligent than my pet: but to quantify this, we'd need to use something silly like brain size, or qualitative/arbitrary things like "this being can talk". I think that human intelligence is a somewhat random point in a very multi-dimensional space, one that technology may never even have a reason to visit. But people tend to subscribe to the notion that this is the very important "point where AGI happens".
tsimionescu|4 years ago
GPTx is not communicating with anyone. It is generating text that resembles text it had in its training set. The fact that human text is normally a form of communication doesn't make generating quasi-random text communication in itself. GPTx is no more communicating than a printer is when printing out text.
A cat or dog leading you to their empty food bowl is actual communication, and they are capable of much more advanced communication as well (especially dogs). The fact that it doesn't look like written text is not that relevant. They are of course worse than GPTx at producing text, just like they are worse than a printer at writing it on a blank page.