So it seems that OpenAI realized they needed more compute power than they could afford, and started a for-profit arm that could take outside investment from Microsoft to cover those costs.
This piece suggests that they have since focused (at least partially) on creating profitable products/services, because they need to show Microsoft that the investment was worthwhile.
Does anyone with more context know if this is accurate, and if so, why they changed their approach/focus? What are they working on, and is AGI still a goal?
Person doing a PhD in AI here (I've seen all of OpenAI's research, been to their office a couple of times, and know some people there). Tbh, the piece was a pretty good summary of some quite common, somewhat negative takes on OpenAI within the research community: that they largely do research by scaling up known ideas, have at times hyped their work beyond its merit, and changed their tune to be for-profit, which is odd given that they want to work in the public interest; and that despite calling themselves OpenAI they publish and open-source code much less frequently than most labs (and with a profit incentive they will likely publish and open-source even less). The original article also presented the positive side: OpenAI is a pretty daring endeavor in trying to get to AGI by scaling up known techniques, and people there do seem to have their hearts in the right place.
>they needed more compute power than they could afford
I wonder why this problem doesn't get much more attention in the field. It seems like models that solve even extremely domain-specific, limited tasks are already running into economic and computational constraints.
Given that OpenAI wanted to be a place that develops general artificial intelligence without so much short-term commercial focus, I wonder why there isn't more research going into approaches that are less computationally intensive, which IMO is in itself a metric for intelligence.
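To put rough numbers on the compute problem, here's a back-of-the-envelope sketch using the common heuristic from the scaling-laws literature that dense-model training costs roughly 6 × parameters × training tokens in FLOPs. All the concrete numbers below are illustrative assumptions, not anyone's actual figures:

```python
# Rough training-cost estimate. Heuristic: FLOPs ~= 6 * N_params * N_tokens.
# Every concrete number here is an illustrative assumption.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense model."""
    return 6.0 * n_params * n_tokens

def gpu_days(total_flops: float, flops_per_sec: float = 1e14) -> float:
    """Wall-clock GPU-days at an assumed sustained throughput
    (1e14 FLOP/s effective per accelerator is a hypothetical figure)."""
    return total_flops / flops_per_sec / 86_400

# A 1.5B-parameter model trained on 40B tokens (roughly GPT-2 scale, assumed):
flops = training_flops(1.5e9, 40e9)
print(f"{flops:.2e} FLOPs, ~{gpu_days(flops):.0f} GPU-days")
```

The point is just that the cost multiplies through two factors that both keep growing, which is why "scale it up" runs into budget limits so fast.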
The original piece itself was flawed. As with many if not most tech companies, reporters and other strangers don't get unfiltered access, and there are reasons why (unpublished work might be reported in a bad light).
OpenAI has made big pushes to bring in diversity and to genuinely open up their work (DeepMind rarely ever does), and I think the way they set up the for-profit arm only ensures they can do bigger and better research.
Can you elaborate? Do you think it was flawed because the reporter didn't get unfiltered access? The piece is still based on tons of interviews, and the reporter seems pretty aware of the conversations going on within AI research (she attends our conferences and the like), so I thought it was a decent summary of the common criticisms and positive aspects of OpenAI (as per my other comment here).
Amusingly, the transcript seems to have been generated by an "AI" tool (https://www.snackable.ai) and gets things wrong just enough to make it very annoying to read.
>"So there were two main theories that came out of this initial founding of the field. One theory was humans are intelligent because we can learn. So if we can replicate the ability to learn in machines, then we can create machines that have human intelligence. And the other theory was humans are intelligent because we have a lot of knowledge. So if we can encode all of our knowledge into machines, then it will have human intelligence. And so these two different directions have kind of defined the entire trajectory of the field. Almost everything that we hear today is actually from this learning branch and it’s called machine learning or deep learning more recently."
Is there still development of the other branch, the "encode all of our knowledge into machines, then it will have human intelligence" one? If so, what is that branch of AI called?
I think the "two theories" are meant to be machine learning and knowledge representation and reasoning (KRR).
Knowledge representation and reasoning is one of the main fields of AI research, on the same broad level as machine learning or robotics, and with its own journals and conferences (KR 2020 will be held in Rhodes, Greece in September). It enjoys much less recognition than machine learning in software development circles because it doesn't receive such broad coverage in the lay press, but it's an active area of research. Google's Knowledge Graph is probably the best-known application of techniques that originated in research from that field.
I don't really know why the author says that machine learning and KRR are "the two main theories" in the field. Perhaps she has access to historical information that I'm not aware of. She says, a little earlier than the passage you quote, that "[AI] was started 70 years ago", which must mean the workshop at Dartmouth College in 1956, where the term "Artificial Intelligence" was first introduced (by John McCarthy, perhaps more recognisable to a programmer audience as the creator of Lisp).
There have certainly been many binarily opposed "camps" in AI, like the symbolicists vs the connectionists, or the "scruffies" vs the "neats", and so on. While I recognise "machine learning vs knowledge representation" as one of the classic dichotomies, I don't think it's as ancient and fundamental a dichotomy as the interviewee makes it sound.
I wonder if the interviewee is mixing up the "ML vs KRR" distinction with a more subtle distinction between different forms of machine learning. I'm thinking of Alan Turing's original description of a "learning machine" from the classic 1950 Mind paper ("Computing Machinery and Intelligence", where he introduced the "imitation game"). Turing's learning machine would learn incrementally, from a small initial knowledge base, from human instruction, and from contact with the world, whereas today's machine learning tries to learn everything from scratch, in an end-to-end, no-human-in-the-loop approach. This distinction, "incremental vs all-at-once learning", seems to fit the interviewee's description of the "two main theories" better.
There's a paper, "The Child Machine vs the World Brain", by the Australian AI scientist Claude Sammut, that goes into some detail on this distinction, based on Turing's paper and later developments in data mining and big data:
https://www.semanticscholar.org/paper/The-Child-Machine-vs-t...
I recommend reading at least its introduction and then digging into the references if you're interested in the history of AI in general, machine learning in particular, and the different ideas on those subjects that have been explored and abandoned over the years.
Warning: contains ancient lore.
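To make the "knowledge" branch concrete, here's a toy sketch of KRR-style inference: hand-encoded facts plus a hand-written rule, with new facts derived by forward chaining rather than learned from data. The facts, the rule, and the names are made up purely for illustration:

```python
# Toy knowledge base: hand-encoded facts queried by forward chaining,
# i.e. the "encode knowledge into machines" branch in miniature.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def rule_grandparent(facts):
    """If X is a parent of Y and Y is a parent of Z, then X is a grandparent of Z."""
    derived = set()
    for (r1, x, y) in facts:
        for (r2, y2, z) in facts:
            if r1 == r2 == "parent" and y == y2:
                derived.add(("grandparent", x, z))
    return derived

def forward_chain(facts, rules):
    """Apply all rules repeatedly until no new facts are derived (a fixed point)."""
    facts = set(facts)
    while True:
        new = set().union(*(rule(facts) for rule in rules)) - facts
        if not new:
            return facts
        facts |= new

kb = forward_chain(facts, [rule_grandparent])
print(("grandparent", "alice", "carol") in kb)  # True
```

Nothing here is trained: all the "intelligence" lives in the encoded facts and rules, which is exactly why this branch scales with human knowledge-engineering effort rather than with compute and data.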
>"Pursuing a G.I., particularly with a long term view, was the central mission of open A.I.. And yeah, there was the traditional Silicon Valley talk of changing the world, but also this sense that if HDI was done wrong, it could have very scary consequences."
If the concern was truly avoiding AGI being done wrong, which presumably includes its development being in the hands of a select few tech giants, wouldn't it be better to simply wind the operation down rather than take money from one of those few tech giants leading in AI development and then run a company with motives that are at odds with each other?
Just off the top of my head, doesn't it seem that Microsoft, with its new billion-dollar investment, now stands to benefit from that first billion dollars invested in the non-profit OpenAI more than anybody else?
The reporter was invited to do a piece on them, and while visiting had trouble reconciling their secrecy with their ethos of openness. She was not allowed to interact with the researchers where they were actually doing their work, and her lunch was moved out of the building so she couldn't overhear their all-hands meeting. (My take is that their openness extended to the curated fruits of research, but the process itself was guarded from any communication channel they couldn't control, i.e., the reporter.)
This seems related to the second part, where they discuss the pressures toward profit that come from strings attached to corporate investment, which they suggest would be different under traditional long-term government funding. They also talked about the paradox of grafting a for-profit branch onto a non-profit org, without resolving it.
I'm a bit unsettled lately when podcasts and stories like this end on a note of "shrug, capitalism, isn't this an interesting problem?". I'd be more encouraged to see folks talking about post-game-theoretic social structures that could categorically solve these issues and allow us to transition out of capitalistic dynamics, rather than fighting those dynamics in order to get work done. This seems to be the rallying call of the nebulous ideas behind "game~b". Wondering if anyone here has been seeing that yet.
Why does this wave of coverage speak about OpenAI as if it were a postmortem?
Maybe it's just me, but companies are not public-benefit enterprises, even if structured in some way as a not-for-profit.
This wave of coverage, and the dialog around it, seems to come from the view that OpenAI somehow owes the world something, when in fact it owes something only to its stakeholders, none of whom are reporters.
>Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.