losteric|6 months ago

> Despite being trained on more compute than GPT-3, AlphaGo Zero could only play Go, while GPT-3 could write essays, code, translate languages, and assist with countless other tasks. The main difference was training data.

This is kind of weird and reductive, comparing specialist to generalist models? How good is GPT-3's game of Go?

The post reads as kind of… obvious, old news padding a recruiting post? We know OpenAI started hiring the kind of specialist workers this post mentions years ago at this point.

rcxdude|6 months ago

Also, the main showcase of the 'zero' models was that they learned with zero training data: the only input was interacting with the rules of the game (as opposed to learning to mimic human games), which seems to be the kind of approach the article is asking for.

9rx|6 months ago

It is even weirder when you remember that Google had already released Meena[1], which was trained on natural language...

[1] And BERT before it, but it is less like GPT.

atrettel|6 months ago

I am quite happy that this post argues in favor of subject-matter expertise. Until recently I worked at a national lab. Many people (both leadership and colleagues) told me that they need fewer if any subject-matter experts like myself because ML/AI can handle a lot of those tasks now. To that end, lab leadership was directing most of the hiring (both internal and external) towards ML/AI positions.

I obviously think that we still need subject-matter experts. This article argues correctly that the "data generation process" (or as I call it, experimentation and sampling) requires "deep expertise" to guide it properly past current "bottlenecks".

I have often phrased this to colleagues this way: we are reaching a point where you cannot just throw more data at a problem (especially arbitrary data). We have to think about what data we intentionally use to make models. With the right sampling of information, we may be able to make better models more cheaply and faster. But that requires knowledge about what data to include and how to come up with a representative sample with enough "resolution" to resolve all of the nuances that the problem calls for. Again, that means that subject-matter expertise does matter.

m463|6 months ago

Kevin Kelly's book The Inevitable had a fascinating look into the future, and in this case one insight in particular.

It basically said that in the future, answers would be cheap and plentiful, and questions would be valuable.

With AI I think this will become more true every day. Maybe AI can answer anything, but won't we still need people to ask the right questions?

https://en.wikipedia.org/wiki/The_Inevitable_(book)

9rx|6 months ago

The funny part is that it argues in favour of scientific expertise, but at the end it says they actually want to hire engineers instead.

I suppose scientists will tell you that has always been par for the course...

lawlessone|6 months ago

Hopefully nothing endangers people.

jrimbault|6 months ago

Very weird reasoning. Without AlphaGo and AlphaZero, there's probably no GPT? Each was a stepping stone, wasn't it?

vonneumannstan|6 months ago

Right but wrong. AlphaGo and AlphaZero are built using very different techniques than GPT-type LLMs. Google created Transformers, which led much more directly to GPTs; RLHF is the other piece, which was largely created inside OpenAI by Paul Christiano.

jimbo808|6 months ago

Google Brain invented transformers. Granted, none of those people are still at Google. But it was a Google shop that made LLMs broadly useful. OpenAI just took it and ran with it, rushing it to market... acquiring data by any means necessary(!)

ethan_smith|6 months ago

Agreed - AlphaGo/Zero's reinforcement learning breakthroughs were foundational for modern AI, establishing techniques like self-play and value networks that fed into the RL training used in later systems.

msp26|6 months ago

It's interesting to compare this to the new third-generation benchmarks from ARC-AGI, which are essentially a big collection of seemingly original puzzle video games. Both Mechanize (OP) and ARC want AI to start solving more real-world, long-horizon tasks. Mechanize wants to get AI working directly on real software development, while ARC suggests a focus on much simpler IQ-test-style tasks.

phreeza|6 months ago

> For example, to train an AI to fully assume the role of an infrastructure engineer, we need RL environments that comprehensively test what's required to build and maintain robust systems.

It's still too early, but at some point we are going to start to see infra and frameworks designed to be easier for LLMs to use. Like a version of Terraform intended for AI, or an edition of the AWS API for LLMs.
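The idea above of infrastructure designed for LLMs might look like tools described by machine-readable schemas with a validation gate in front of them, rather than prose documentation. A minimal sketch; the `create_server` tool, its parameters, and the schema layout are all hypothetical, not any real AWS or Terraform interface:

```python
# Hypothetical manifest for one infrastructure action, written the way
# LLM tool-calling interfaces expect: explicit parameters, types, and
# constraints. Everything here is illustrative.
CREATE_SERVER_TOOL = {
    "name": "create_server",
    "description": "Provision a virtual machine in one region.",
    "parameters": {
        "type": "object",
        "properties": {
            "region": {"type": "string", "enum": ["us-east-1", "eu-west-1"]},
            "size": {"type": "string", "enum": ["small", "medium", "large"]},
            "count": {"type": "integer", "minimum": 1, "maximum": 10},
        },
        "required": ["region", "size"],
    },
}

def validate_call(tool, args):
    """Reject a model-proposed call before it ever reaches real infra."""
    schema = tool["parameters"]
    errors = []
    for name in schema["required"]:
        if name not in args:
            errors.append(f"missing required parameter: {name}")
    for name, value in args.items():
        spec = schema["properties"].get(name)
        if spec is None:
            errors.append(f"unknown parameter: {name}")
            continue
        if "enum" in spec and value not in spec["enum"]:
            errors.append(f"{name} must be one of {spec['enum']}")
        if spec.get("type") == "integer":
            if not isinstance(value, int):
                errors.append(f"{name} must be an integer")
            elif not spec.get("minimum", value) <= value <= spec.get("maximum", value):
                errors.append(f"{name} out of range")
    return errors
```

The point of the gate is that an "AI edition" of an infra API can be strict where human docs are forgiving: a malformed call is rejected with a specific error the model can read and retry on, instead of silently provisioning the wrong thing.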
unknown|6 months ago
[deleted]
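atrettel's earlier point about intentional, representative sampling (as opposed to throwing arbitrary data at a model) can be sketched as stratification over a domain attribute that an expert chooses. A minimal illustration with hypothetical flow-regime labels; the attribute and record layout are made up for the example:

```python
import random
from collections import defaultdict

def stratified_sample(records, key, per_stratum, seed=0):
    """Take the same number of examples from each stratum so rare
    regimes are represented instead of drowned out by common ones."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    strata = defaultdict(list)
    for rec in records:
        strata[rec[key]].append(rec)
    sample = []
    for _label, bucket in sorted(strata.items()):
        sample.extend(rng.sample(bucket, min(per_stratum, len(bucket))))
    return sample

# Hypothetical dataset: common cases vastly outnumber the rare ones
# that carry most of the physical nuance.
data = (
    [{"regime": "laminar", "id": i} for i in range(1000)]
    + [{"regime": "transitional", "id": i} for i in range(50)]
    + [{"regime": "turbulent", "id": i} for i in range(10)]
)
subset = stratified_sample(data, key="regime", per_stratum=10)
```

A uniform random draw of 30 records would likely contain zero turbulent cases; the stratified subset keeps all three regimes, which is exactly the kind of "resolution" decision that requires knowing which attribute to stratify on in the first place.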
rob74|6 months ago
worthless-trash|6 months ago
Anyways, good time for society.
getnormality|6 months ago
Sevii|6 months ago
Animats|6 months ago
Is that actually true? Is the mini-industry of people looking at pictures and classifying them dead? Does Mechanical Turk still get much use?
BrenBarn|6 months ago
Or we could just, you know, not do that at all.