> I am an AI skeptic. I am baffled by anyone who isn’t. I don’t see any path from continuous improvements to the (admittedly impressive) ‘machine learning’ field that leads to a general AI
- I share the skepticism towards any progress towards 'general AI' - I don't think that we're remotely close or even on the right path in any way.
- That doesn't make me a skeptic towards the current state of machine learning though. ML doesn't need to lead to general AI. It's already useful in its current forms. That's good enough. It doesn't need to solve all of humanity's problems to be a great tool.
I think it's important to make this distinction and for some reason it's left implicit or it's purposefully omitted from the article.
Yeah I agree - during undergrad, I spent a few years studying neuroscience, and I was very let down by my first ML/AI course. Compared to what I had learned about the brain, what we called an "ANN" just seemed like such a silly toy.
The more you learn about neurobiology, the more apparent it is that there are so many levels of computation going on - everything from dendritic structure, to cellular metabolism, to epigenetics has an effect on information processing. The idea that we could reach some approximation of "general intelligence" by just scaling up some very large matrix operations just seemed like a complete joke.
However, as you say, that doesn't mean what we've done in ML is not worthwhile and interesting. We might have over-reached in thinking ML is ready to drive a car without major forthcoming advancements, but use-cases like style transfer and DLSS 2 are downright magical. Even if we only make marginal improvements to current ML, I'm sure there is a ton of untapped potential in applying this tech to novel use-cases.
I'm in favor of changing the terminology from AI and ML to something along the lines of 'prediction model', so that the idea of machines 'thinking' is replaced with them 'predicting'. The current terms make it too easy for our mushy meat brains to assume that AI and ML will lead to general AI, or as I like to call it, a 'general-purpose decision maker'. It's all about the language!
"Kuhn challenged the then prevailing view of progress in science in which scientific progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued for an episodic model in which periods of conceptual continuity where there is cumulative progress, which Kuhn referred to as periods of "normal science", were interrupted by periods of revolutionary science."
I think this is the accepted model in the philosophy of science since the 1970s. That's why I find this argument about AI so strange, especially when it comes from respected science writers.
The idea that accumulated progress along the current path is insufficient for a breakthrough like AGI is almost obviously true. Your second point is important here. Most researchers aren't concerned with AGI because incremental ML and AI research is interesting and useful in its own right.
We can't predict when the next paradigm shift in AI will occur. So it's a bit absurd to be optimistic or skeptical. When that shift happens we don't know if it will catapult us straight to AGI or be another stepping stone on a potentially infinite series of breakthroughs that never reaches AGI. To think of it any other way is contrary to what we know about how science works. I find it odd how much ink is being spent on this question by journalists.
> I share the skepticism towards any progress towards 'general AI' - I don't think that we're remotely close or even on the right path in any way.
I actually think that AGI is deceptively simple. I don't have a proof, but I have a (rather embryonic, frankly) theory of how it's going to work.
I believe AGI is an analogue of the third Futamura projection, but for (reinforcement) learners rather than compilers.
So the first level is that you have a problem and a learner, and you teach the learner to solve the problem. The representation of the problem is implicit in the learner.
The second level is that you have a language which can describe the problem and its solution, and a (2nd level) learner, and you teach the 2nd level learner to create (1st level) solvers of the problem based on the problem description language. The ability to interpret the problem description language is implicit in the 2nd level learner.
The third level is that you have a general description language capable of describing any problem description language, and you teach the 3rd level learner to take a description of a problem description language and produce 2nd level learners that can use that language to solve problems posed in it.
Now, just like in the Futamura projections, this is where it stops. You have a "generally intelligent" creature on the 3rd level. You can talk to them at the level of how to effectively describe or solve problems (create a specialized language for it), and they will work all the way down to a way of attacking (solving) them.
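For readers who haven't met the Futamura projections, here is a toy sketch in Python. It uses functools.partial as a stand-in for a real partial evaluator (a real specializer would also optimize the specialized result), and the miniature "language" is invented purely for illustration:

```python
from functools import partial

def specialize(f, known):
    """Toy partial evaluator: fix f's first argument."""
    return partial(f, known)

def interpreter(program, data):
    """Run a 'program' (a list of (op, operand) pairs) on an integer."""
    acc = data
    for op, val in program:
        acc = acc + val if op == "add" else acc * val
    return acc

program = [("add", 2), ("mul", 3)]

# 1st projection: specialize the interpreter to one program -> an "executable".
executable = specialize(interpreter, program)
assert executable(5) == 21

# 2nd projection: specialize the specializer to the interpreter -> a "compiler".
compiler = specialize(specialize, interpreter)
assert compiler(program)(5) == 21

# 3rd projection: specialize the specializer to itself -> a compiler generator
# that turns any interpreter into a compiler. The ladder stops here.
compiler_generator = specialize(specialize, specialize)
assert compiler_generator(interpreter)(program)(5) == 21
```

The commenter's analogy swaps "program" for "problem" and "interpreter" for "learner": the 3rd level learner, like the compiler generator, is the level beyond which no fourth is needed.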
In humans, the 3rd level, general intelligence (AKA "sentience"), eventually evolved from the 2nd level, and it was a creation of the general internal language (which probably co-evolved to be shared). The 2nd level is an internal representation of the world that can be manipulated, but it only ever refers to the external world, not itself; it allows creatures to make conscious plans, but they lack the ability to reflect on the planning (and also learning) process itself. The "bicameral mind" is a theory of how we acquired the 3rd level from the 2nd, and the 3rd level is why "we are strange loops".
Anyway, the problem is that the higher you go up the chain, the harder it becomes to create the learner; it's a much more general problem. But I think the ladder must be, and should be, climbed. I believe that Deepmind (and RL research) has solved the 1st level, is now working on the 2nd level, but they already somewhat dimly see the 3rd level.
> I think it's important to make this distinction and for some reason it's left implicit or it's purposefully omitted from the article
I beg to disagree. The piece clearly states your point at the end, using the metal-beating analogy: great things were done by blacksmiths beating metal, but an internal combustion engine was not one of them.
Why I'm pro-AI: Neural nets.
I worked on object detection for several years at one company using traditional methods, predating TensorFlow by a few years. We had a very sophisticated pipeline that had a DSP front end and a classical boundary detection scheme with a little neural net. The very first SSDMobileNet we tried blew away 5 years worth of work with about two weeks of training and tuning.
Other peers of mine work in industrial manufacturing, and classification and segmentation with off the shelf NN's has revolutionized assembly line testing almost overnight.
So yes, DNNs absolutely do some things vastly better than previous technology. Hands down.
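For a sense of how little code the off-the-shelf route takes today, here is a sketch using torchvision's pretrained SSDLite/MobileNet detector (my substitution, since the SSDMobileNet mentioned above was a TensorFlow-era model; the random tensor stands in for a real camera frame):

```python
import torch
import torchvision

# Pretrained SSD-style detector; weights are downloaded on first use.
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(
    weights="DEFAULT")
model.eval()

frame = torch.rand(3, 320, 320)        # stand-in for a real camera frame
with torch.no_grad():
    (detections,) = model([frame])     # dict with boxes, labels, scores
print(detections["boxes"].shape, detections["scores"][:5])
```

Compare that with years of hand-tuned DSP and boundary-detection code.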
Why I'm Anti-AI: hype
The class of problems addressed by recent developments in NN/DNN software has failed horribly to scale to even modestly real-world, rational multi-tasking. SAE Level 5 driving automation is the poster child. When hype master Elon Musk backs away, that is telling.
We're on the bleeding edge here, IMHO we NEED to try everything. There's no telling which path has fruit. Look at elliptic curves: half a century with no applications, now they are the backbone of the internet. Yes, there will be BS, hype, snake oil, vaporware, but there will also be some amazing tech.
I say be patient and skeptical.
> it's left implicit or it's purposefully omitted from the article
It's explicitly right there in the essay...
> Machine learning has bequeathed us a wealth of automation tools that operate with high degrees of reliability to classify and act on data acquired from the real world. It’s cool!
> Brilliant people have done remarkable things with it.
You seem to be in agreement with the article but don't realize it.
> It doesn't need to solve all of humanity's problems to be a great tool.
As a side note, I'd like to say that humanity's own intelligence is actually able to come up with solutions to its problems; we don't need AGI for that. Humanity is unable to implement those solutions for reasons that are beyond technical. How an AGI would get over those hurdles, I have no idea.
There's good reason to be skeptical of AI as it is. Here are a couple of reasons:
Racial bias in facial recognition: error rates up to 34% higher on dark-skinned women than on lighter-skinned males, partly because "default camera settings are often not optimized to capture darker skin tones, resulting in lower-quality database images of Black Americans": https://sitn.hms.harvard.edu/flash/2020/racial-discriminatio...
Chicago's "Heat List" predicts arrests, doesn't protect people or deter crime: https://mathbabe.org/2016/08/18/chicagos-heat-list-predicts-...
What a confused and muddled post, trying to touch on psychology, philosophy, and mathematics, and missing the mark on basically all three. I'm quite bearish on AI/ML, but calling it a "parlor trick" is like calling modern computers a parlor trick. I mean, at the end of the day, they're just very fast abacuses, right? Let's face it: what ML has brought to the forefront -- from self-landing airplanes to self-driving cars, to AI-assisted diagnoses -- is pretty impressive. If you insist on being reductive, sure, I guess it's "merely" statistics.
Bringing up quantitative vs qualitative analysis is just silly, since science has had this problem way before AI. Hume famously described it as the is/ought problem†. And that was a few hundred years ago.
Finally, dropping the mic with "I don't think we're anywhere close to consciousness" is just bizarre. I don't think that any serious academic working in AI/ML has made any arguments that claim machine learning models are "conscious." And Strong AI will probably remain unattainable for a very long time (I'd argue forever). This is not a particularly controversial position.
† Okay, it's not the same thing, but closely related. I suppose the fact–value distinction might be a bit closer.
> We don’t have any consensus on what we meant by “intelligence,” but all the leading definitions include “comprehension,” and statistical inference doesn’t lead to comprehension, even if it sometimes approximates it.
So now the semantic shell game is stuck on defining "comprehension". In the next paragraph he starts to suggest it has something to do with generalization -- but that's a concept ML practitioners are constantly working to formalize, and they use those formal measures to good effect.
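For concreteness, the most basic of those formal measures is simply the gap between performance on data a model has seen and data it has not; a minimal sketch with scikit-learn (synthetic data, illustrative only):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Generalization, operationalized: accuracy on the training set versus
# accuracy on held-out data the model has never seen.
X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("train:", model.score(X_tr, y_tr))      # near 1.0 (memorization)
print("held out:", model.score(X_te, y_te))   # the number that matters
```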
Also, "comprehension" is absent in plenty of definitions of intelligence. Take Oxford's "the ability to acquire and apply knowledge and skills". Huge parts of the world work around notions of intelligence demonstrated through action, not a philosophical abstraction.
I'll never understand the "ML won't make my version of AGI" crowd's view on science in general. "This won't work in ways I refuse to define" isn't scientific criticism, and doesn't show any particular curiosity or interest in advancing the state of the art. It's just a rhetorical pose that seems aimed at building up a platform for the next time there's some AI pratfall to point out.
> what ML has brought to the forefront -- from self-landing airplanes to self-driving cars
I am not aware of any ML in flight controls. Being black box and probabilistic by nature, these things won’t get past industry standards and regulations (at least for a while).
I liked the post. It is clearly aimed at people who think that we are close to achieving AI and that AI "knows best".
It might be obvious to most people here on HN that we are very far away from true artificial intelligence, but it isn't obvious to most people outside, and the marketing bullshit around calling statistical models "artificial intelligence" paints the wrong picture. This article shows why.
What baffles me is the number of humans who think they are in the personal possession of some super special sacred form of magical and unexplainable intelligence. "AI is just stats" yes, indeed, but so is human intelligence. In many ways, AI from 2010 was already better than human intelligence.
Three remarks:
- The task many people seem to be benchmarking against is not just a measure of general intelligence, but a measure of how well AI is able to emulate human intelligence. That's not wrong, but I do find it amusing. Emulating any system within another generally requires an order of magnitude higher performance.
- The degree to which human intelligence fails catastrophically in each of our lives, on a continuous basis, is way too quickly forgotten. We have a very selective memory indeed. We have absolutely terrible judgment, are super irrational, and pretty reliably make decisions that are against our own interests, whether it's with regard to tobacco use, avoidance of physical exercise, or refusal of life-saving medications or prophylactics. We avoid spending time learning maths and science because it's not cool, and we openly display pride in our anti-intellectual behaviours and attitudes. We're all incredibly stupid by default.
- AI researchers need to work more closely with neuroanatomists. The main thing preventing AI from behaving like a human is the different macro structure of human NNs vs artificial NNs. Our brains aren't random assortments of randomly connected neurons: there's structure in there that explains our patterns of behaviour, and that is lacking in even the most modern AI. We can't expect AI to be human if we don't give it human structures.
This article is mostly a straw man, while still containing some valid ML criticism. I am an ML s(c|k)eptic too, in that popular conceptions of what ML is currently overpromise, often don't even understand what ML actually is, and are often just some layperson's imagination of what "artificial intelligence" might do.
This article is the opposite. He's treating ML as a simple supervised architecture that doesn't allow any domain knowledge to be incorporated and simply dead-reckons, making unchecked inferences from what it learned in training. Under those constraints, everything he says is correct. But there is no reason ML has to be used this way; in fact it is extremely irresponsible to do so in many cases. Using ML as part of a larger system (whether the constraints are built directly into the model architecture and learned, or imposed from domain knowledge) is possible, and is generally the right way to build an "AI" system.
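A minimal sketch of that "ML as part of a system" point; the names and thresholds below are illustrative assumptions, not a real product:

```python
CONFIDENCE_FLOOR = 0.9

def domain_rules_allow(label: str, context: dict) -> bool:
    # Example hard constraint imposed from domain knowledge.
    return not (label == "auto_approve" and context.get("after_hours", False))

def decide(prediction: str, confidence: float, context: dict) -> str:
    """Gate a learned model's raw output instead of letting it dead-reckon."""
    if confidence < CONFIDENCE_FLOOR:
        return "defer_to_human"      # weak evidence: don't act unchecked
    if not domain_rules_allow(prediction, context):
        return "defer_to_human"      # domain knowledge overrides the model
    return prediction

print(decide("auto_approve", 0.95, {"after_hours": True}))   # defer_to_human
print(decide("auto_approve", 0.95, {"after_hours": False}))  # auto_approve
```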
I think ML has its limitations, and I would be surprised to see current neural networks evolve into AGI. But I also don't think the engineers working in this space are as out to lunch as the author seems to imply, and I would not write off what contemporary ML systems can accomplish based on flaws pointed out in relation to a very narrow view of what ML is.
> But I also don't think the engineers working in this space are as out to lunch as the author seems to imply.
Are you at all close to this space? It sounds like you may be underestimating corporate politics and the lack of rigour and ethical thought with which these systems are applied. The example Cory gives on policing -- and the many other examples you can find in Evgeny Morozov's book or "The End of Trust" -- are solid proof of this.
> This article is mostly a straw man, while still containing some valid ML criticism.
I don't think this is an example of a straw man, given that his audience is readers of Locus, a science fiction magazine. While researchers and practitioners in ML understandably hold a more nuanced, informed view, the position he's arguing against is pretty common among the general public, and certainly common in science fiction.
That's how I felt too. Most of the article tries to pull us in with emotional appeals (mostly about the racist things a computer will do if tasked with important decisions). While that criticism is welcome, it isn't specifically meaningful as an argument against AGI. The only part that was seemed to be the claim that statistical inference is not a path to AGI, which is somehow backed up by the emotional material.
What deep learning seems to step into more and more is time-based statistical inference.
AGI is not:
seeing that a girl has a frown on her face.
seeing that a girl has a frown because someone said "you look fat".
seeing that a girl has a frown because her boyfriend said "you look fat".
seeing that Maya has generally been upset with her boyfriend, who also most recently told her she is fat.
But keep going and going and going and we might get somewhere. Do we have the computer power to keep going? I don't know.
> Let’s talk about what machine learning is...it analyzes training data to uncover correlations between different phenomena.
The author seems to have missed or excluded reinforcement learning and planning algorithms in this definition.
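To make the omission concrete, here is a minimal tabular Q-learning loop on a toy five-state corridor (a toy setup of my own): the agent generates its own training data by acting, and learns the value of actions, rather than mining correlations out of a fixed dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 5                           # state 4 is the goal
Q = np.zeros((n_states, 2))            # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):
    s = 0
    while s != n_states - 1:
        # Explore while values are unknown, otherwise mostly act greedily.
        explore = rng.random() < eps or not Q[s].any()
        a = rng.integers(2) if explore else int(Q[s].argmax())
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.round(2))   # the values for action 1 ramp up toward the goal
```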
My criticism of AI criticism in general is that no one admits that, at the root of it, we do not understand thinking (or "consciousness"). We are merely the "recipient" or enjoyer of the process, which is opaque. Just as AlphaGo, even if it is just a facsimile of a Go player, could beat a human at Go, it is probable that an AI could produce a passable facsimile of thinking at some point. Its mechanisms would be as opaque as human thinking (even to itself), but the results would be undeniable. AGI is a possibility.
I think he's doing a bit of bait and switch there. Knowing reliably whether arrests are genuinely racist or if winks are flirtatious is superhuman intelligence.
> But the idea that if we just get better at statistical inference, consciousness will fall out of it is wishful thinking.
I'm a mostly disinterested spectator in current AI research, and even I know that it's not all about that. Just google "AI alignment" for an example, and god only knows what's going on in private research.
I think the definition of racism in this context can be simple. If the rate of false positives for Black people is significantly higher than the average across the nation, then it's racism. Significantly higher can mean "one stddev higher".
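That criterion is easy to state in code. A rough sketch (group labels and data are synthetic, and real audits use more careful statistics than a one-stddev cutoff):

```python
import numpy as np

def flag_biased_groups(y_true, y_pred, groups):
    """Flag groups whose false-positive rate exceeds the mean group
    FPR by more than one standard deviation (the commenter's cutoff)."""
    fpr = {}
    for g in np.unique(groups):
        negatives = (groups == g) & (y_true == 0)   # people who did nothing...
        fpr[g] = y_pred[negatives].mean()           # ...but were flagged anyway
    rates = np.array(list(fpr.values()))
    cutoff = rates.mean() + rates.std()
    return {g: round(r, 3) for g, r in fpr.items() if r > cutoff}

# Tiny synthetic example: identical behaviour, unequal error rates.
rng = np.random.default_rng(0)
groups = rng.choice(["a", "b", "c"], 3000)
y_true = np.zeros(3000, dtype=int)                  # nobody actually offended
p = np.where(groups == "b", 0.30, 0.10)             # group b is over-flagged
y_pred = rng.binomial(1, p)
print(flag_biased_groups(y_true, y_pred, groups))   # flags group 'b'
```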
I found this to be a very succinct, sober analysis of ML ("AI") techno-solutionism. Cory is a great writer and knows how to explain ideas in a simple, no-nonsense way. This article reminded me of Evgeny Morozov's "To Save Everything, Click Here" (https://bookshop.org/books/to-save-everything-click-here-the...), where you can find many more examples of how focusing on the quantitative aspect of a problem and ignoring the social, qualitative context around it often goes wrong.
This kind of talk can be steelmanned, but even that version doesn't have reassuring answers to the likes of
> Okay, you've all told us that progress won't be all that fast. But let's be more concrete and specific. I'd like to know what's the least impressive accomplishment that you are very confident cannot be done in the next two years.
(from https://intelligence.org/2017/10/13/fire-alarm/)
Just today I was rather astonished by https://moultano.wordpress.com/2021/07/20/tour-of-the-sacred... -- try digging up something comparable from mid-2019.
The first AI winter came after we realized that the AI of the time, the high level logic, reasoning and planning algorithms we had implemented, were useless in the face of the fuzziness of the real world. Basically we had tried to skip straight to modeling our own intellect, without bothering to first model the reptile brain that supplies it with a model of the world on which to operate. Being able to make a plan to ferry a wolf, sheep and cabbage across the river in a tiny boat without any of them getting eaten doesn't help much if you're unable to tell apart a wolf, sheep and cabbage, let alone steer a boat.
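That puzzle is a neat showcase of exactly the pre-winter style of AI: pure symbolic search over a hand-crafted state space. A small sketch (the state representation is my own):

```python
from collections import deque

# A state is (items on the left bank, the farmer's bank);
# everything starts on the left and must end up on the right.
ITEMS = frozenset({"wolf", "sheep", "cabbage"})
UNSAFE = [{"wolf", "sheep"}, {"sheep", "cabbage"}]

def safe(state):
    left, farmer = state
    unattended = ITEMS - left if farmer == "L" else left
    return not any(pair <= unattended for pair in UNSAFE)

def moves(state):
    left, farmer = state
    here = left if farmer == "L" else ITEMS - left
    for cargo in [None, *here]:                      # cross alone or with one item
        delta = set() if cargo is None else {cargo}
        new_left = left - delta if farmer == "L" else left | delta
        yield cargo, (frozenset(new_left), "R" if farmer == "L" else "L")

def solve():
    start = (ITEMS, "L")
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == (frozenset(), "R"):
            return path
        for cargo, nxt in moves(state):
            if safe(nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [cargo]))

print(solve())   # e.g. ['sheep', None, 'wolf', 'sheep', 'cabbage', None, 'sheep']
```

The plan comes out perfectly, and none of it helps you recognize a sheep in a camera frame. That gap is the point of the comment above.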
That's what makes me excited about our recent advances in ML. Finally, we are getting around to modeling the lower levels of our cognitive system, the fuzzy pattern recognition part that supplies our consciousness with something recognizable to reason about, and gives us learned skills to perform in the world.
We still don't know how to wire all that up. Maybe a single ML model can achieve AGI if it is adaptable enough in its architecture. Maybe a group of specialized ML models need to make up subsystems for a centralized AGI ML-model (like a human's visual and language centers). Maybe we need several middle layers to aggregate and coordinate the submodules before they hook into the central unit. Maybe we can even use the logic, planning or expert system approach from before the AI winter for the central "consciousness" unit. Who knows?
But to me it feels like we've finally got one of the most important building blocks to work with in modern ML. Maybe it's the only one we'll need, maybe it's only a step along the way. But the fact that we have not, in a handful of years, managed to go from "model a corner of a reptile brain" to "model a full human brain" is no reason to call this a failure or predict another winter just yet. We've got a great new building block, and all we've really done with it so far is basically to prod it with a stick, to see what it can do on its own. Maybe figuring out the next steps toward AGI will involve another winter. But the advances we've made with ML have convinced me that we'll get there eventually, and that when we do, ML will be part of it to some extent. Frankly I'm super excited just to see people try.
We are paying for the incredible bamboozle that is the phrase "machine learning". If we had used "computerized statistical inference" instead and the phrase "machine learning" did not exist, the attitudes of everyone from investors to regulators, customers, vendors, doomsayers and boosters alike would be vastly better on the whole.
Nearly everyone here mostly knows, when seeing or hearing "AI", that it's a total crock. Nearly everyone here knows ML is applied statistics done with a computer, but this is not common knowledge, and it really should be.
Past performance is not indicative of future results across distinct domains.
Within a single problem space (or sub-space) past performance can generalise quite well.
There's a problem with scaling solutions and expecting performance to continue to increase in a continuous exponential manner: growth that we perceive as exponential is often only on a long-life S-Curve.
We've seen this in silicon, where what appears to the layman to have been exponential growth has in fact been a sequence of more limited growth spurts bound by the physical limits of scaling within whatever model of design was active at the time.
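A quick numerical illustration of that point (the parameters are arbitrary): early on, a logistic S-curve is indistinguishable from an exponential, and the ceiling only shows up late:

```python
import numpy as np

r, ceiling = 1.0, 1e6
for t in np.linspace(0, 10, 6):
    exponential = np.exp(r * t)
    logistic = ceiling / (1 + (ceiling - 1) * np.exp(-r * t))
    print(f"t={t:4.1f}  exp={exponential:10.1f}  logistic={logistic:10.1f}")
# The two columns track each other until t approaches ln(ceiling)/r ~ 13.8,
# after which the logistic flattens while the exponential keeps going.
```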
The question of where the bounds to the problem domains are, and when new ideas or paradigms are required is much more difficult in AI than it has been in microprocessors.
It's easy enough to formulate the question "how small can this be before the changes in physical characteristics at scale prevent it from working?", if rather more difficult to answer.
AI is so damned steeped in the vagaries of the unknown that I can't even think of the question.
For a short and very non-technical article, this is well written.
The current approach to machine learning is not going to reach general-purpose AI through steady steps and gradual innovations. Things like GPT-3 seem amazingly general at first, but even GPT-3 will quickly plateau to the point where you need a bigger and bigger model, and more and more data, for smaller and smaller gains.
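The published scaling-law fits make those diminishing returns concrete. A sketch using the power-law form, with the exponent Kaplan et al. (2020) report for language-model loss versus parameter count (treat the numbers as illustrative):

```python
# Loss scales roughly as N^(-alpha) in parameter count N; Kaplan et al.
# report alpha ~ 0.076. Each 10x of parameters buys a shrinking
# absolute improvement.
alpha = 0.076
prev = None
for n in [1e8, 1e9, 1e10, 1e11, 1e12]:
    loss = n ** -alpha
    gain = "" if prev is None else f"  improvement: {prev - loss:.4f}"
    print(f"params = {n:.0e}  relative loss = {loss:.4f}{gain}")
    prev = loss
```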
There need to be several breakthroughs similar to the original Deep Learning breakthrough away from statistical learning. I would say it's 4-7 Turing awards away at a minimum. Some expect less, some more.
>The problems of theory-free statistical inference go far beyond hallucinating faces in the snow. Anyone who’s ever taken a basic stats course knows that “correlation isn’t causation.” For example, maybe the reason cops find more crime in Black neighborhoods is because they harass Black people more with pretextual stops and searches that give them the basis to unfairly charge them, a process that leads to many unjust guilty pleas because the system is rigged to railroad people into pleading guilty rather than fighting charges. (...)
Being able to calculate that Inputs a, b, c… z add up to Outcome X with a probability of 75% still won’t tell you if arrest data is racist, whether students will get drunk and breathe on each other, or whether a wink is flirtation or grit in someone’s eye.
Except if information about what we consider racist etc. also passes through the same inference engine (feeding it with information on arbitrary additional meta levels).
So, sure, an AI which is just fed crime stats to make inferences can never understand beyond that level.
But an AI which is fed crime stats plus cultural understanding of such data (e.g. one which is fed language, like a baby is, and which is then fed cultural values through osmosis: news stories, recorded discussions with people, etc.) is another matter.
In the end, it could also happen through actual socialization: you put the AI in a portable human-like body (the classic sci-fi robot), and have it feed its learning NN by being around people, same as any other person.
This is a well-written and well-reasoned argument - BUT - I tend toward the materialist philosophy, so the argument doesn't really hold there.
Yes, an ML model that infers B from A might not "understand" what A or B are... yet. But what is it to "understand" anyway? Just a more complex process in a different part of the machine.
If the human brain is just a REALLY large, trained, NN, there's no reason that we won't be able to replicate it given enough computing power.
> I don’t see any path from continuous improvements to the (admittedly impressive) ‘machine learning’ field that leads to a general AI any more than I can see a path from continuous improvements in horse-breeding that leads to an internal combustion engine.
While I also don't expect that AGI will emerge solely through optimizing statistical inference models, I also don't think "improvements to the machine learning field" consist only of such optimizations. Surely further insights, paradigm shifts, etc., will continue to play a role in advancing AI.
Perhaps it's more a matter of semantics and a bad analogy; "machine learning" seems far more broad a field than "horse-breeding." Horse-breeding is necessarily limited to horses. Machine learning is not limited to a specific algorithm or data model.
Even calling it a "statistical inference tool", while not wrong, is deceptive. What exactly does he or anyone expect or want an AGI to do that can't be understood at some level as "statistical inference"? One might say: "Well, I want it to actually understand or actually be conscious." Why? How would you ever know anyway?
It's worth pointing out that "machine learning" is a specific term of art, not a term for AI in general. It refers specifically to techniques like the "convolutional neural networks" that have made a bunch of progress over the past 15-25 years.
The moment you have a paradigm shift, sure, it can be considered "learning done by machines", but it's not "Machine Learning™" anymore.
--
This is why the author put it in quotes: since it's a term comprehensible to anybody, it has the unfortunate side effect that people outside the field take the "plain english" meaning of it, rather than realizing it's loaded with some extra specific meaning for practitioners in the field.
The author makes some interesting parallels to internal combustion engines not being possible without machine tools.
The Antikythera mechanism was built 1800 years before the first metal lathe. It is a fantastically sophisticated[1] clockwork with dozens of gears, concentric shafts, and brilliant, practiced fabrication. It is not a unique device. It was built by someone who knew what they were doing and had made this thing many times. It is obvious in the same way that you can tell when code was written from the start knowing how the finished product would look.
The device displays the relative positions of stars and planets from their underlying orbits, and was built with bronze hammers and some small fragments of steel. All that to say, you can do incredible things with practice, care, and tools that are thousands of years too primitive.
[1]: https://en.wikipedia.org/wiki/File:AntikytheraMechanismSchem...
> It’s not sorcery, it’s “magic” – in the sense of being a parlor trick, something that seems baffling until you learn the underlying method, whereupon it becomes banal.
I think part of the problem is the belief that human or animal intelligence is somehow more mystical.
People who think like this will see an ML implementation solve a problem better and/or faster than a human and counter "well, it's just using statistical inference or pattern recognition" and my response is "so?" Humans use the same processes and parlor tricks to understand and replay things.
Where humans excel is in generalizing knowledge. We can apply bits and pieces of our previous parlor tricks to speed up comprehension in other problem spaces.
But none of it is magic. We're all simple machines.
Ooof. Premed dropout here, so admittedly not an expert in human biology but this is a wild statement. A neuron is simple in the same way a transistor is simply a silicon sandwich doped with metals.
A parlor trick is something that once you understand, is straightforward to implement on your own. Are you arguing that anyone now or in the foreseeable future could simply recreate the abilities of a human? If so, what evidence could you show me to support that?
We are not "simple machines" we are the result of 3.7 billion years of evolution. We are the most complex known thing in the universe. We are far more complicated than anything we can hope to make in the forseeable future, if ever.
Unfortunately it’s pretty clear from the article that Cory does not have much familiarity with the research going on in the field of machine learning, and is creating a straw man. Quite a lot of work is being done on causal inference, out-of-distribution generalization, fairness, etc. Just because that is not the focus of the big sexy AI posts from Google et al does not mean that the work isn’t being done. I’d also point out that humans can infer causality for simple systems, but for any sufficiently complex system we also can’t reason causally. But that does not mean we can’t infer useful properties and make informed, reasonable decisions.
I’d also point out that not all models are “theory-free”, as he describes it. I specifically do work in areas where we combine “theory” and machine learning, and it works very well.
And finally, his point about comprehension does not really fly for me. There is no magical comprehension circuit in our brain. It’s all done via biological processes we can study and emulate. Will that end up being a scaled up version of current neural nets? Will it need to arise from embodied cognition in robots? Will it be something else? I don’t know, but it’s certainly not magic, and we’ll get there eventually. Whether that’s 10 years or 1000, who knows.
Are current paradigms going to lead to AGI? Frankly, I’d just be guessing if I even tried to answer that. My gut instinct is no, but again, that’s just a guess. Can current methods evolve into better constrained systems with more generalizable results and measurable fairness? Absolutely.
I don't get why this article conflates machine learning progress and racism. Machine learning is not inherently racist, though it may be implemented, intentionally or unintentionally, to produce racist results. It's much easier to correct bias in machine learning models than in humans, though, and easier to test to confirm that you have corrected it.
There are a whole slew of "advanced AI programs" out there that tell managers who to fire and who to keep, tell judges whether someone should go to jail or not, etc.
There are a lot of systems out there where people's lives are changed forever "because the machine said so".
I'd argue that since machine learning learns only from its data (produced by its human creators), it becomes a great tool for baking unconscious biases into a completely opaque system and amplifying them.
It's much easier to tell whether a human seems biased than whether the dataset fed to an AI algorithm is.
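A toy illustration of that baking-in (all data synthetic, names mine): train a model on historical decisions that carried a group penalty, and the model dutifully learns the penalty as if it were signal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # a protected attribute
skill = rng.normal(0, 1, n)                    # what decisions *should* use
# Historical labels: skill matters, but group 1 was quietly penalized.
hired = skill - 0.8 * group + rng.normal(0, 0.5, n) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
print(model.coef_)   # a large negative weight on `group`: the bias, learned
```

And because that coefficient is buried inside a "model", the bias now looks objective, which is exactly the opacity the comment above describes.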