EternalFury | 8 months ago

What John Carmack is exploring is pretty revealing. Train models to play 2D video games to a superhuman level, then ask them to play a level they have not seen before or another 2D video game they have not seen before. The transfer function is negative. So, in my definition, no intelligence has been developed, only expertise in a narrow set of tasks.

It’s apparently much easier to scare the masses with visions of ASI, than to build a general intelligence that can pick up a new 2D video game faster than a human being.

ozgrakkurt|8 months ago

Seeing comments here saying “this problem is already solved”, “he is just bad at this” etc. feels bad. He has devoted a long time to this problem by now. He is trying to solve it to advance the field. And needless to say, he is a legend in computer engineering, or whatever you call it.

It should be required to point to the “solution” and maybe how it works to say “he just sucks” or “this was solved before”.

IMO the problem with current models is that they don’t learn categorically like: lions are animals, animals are alive. goats are animals, goats are alive too. So if lions have some property like breathing and goats also have it, it is likely that other similar things have the same property.
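
A toy sketch of the category-level inference described above (the taxonomy and observations are made-up illustration data): properties observed in every known member of a category are assumed to generalize to its other members.

```python
# Made-up taxonomy and observations, purely for illustration.
taxonomy = {"lion": "animal", "goat": "animal", "wolf": "animal", "oak": "plant"}
observed = {"lion": {"breathes"}, "goat": {"breathes"}, "oak": {"photosynthesizes"}}

def likely_properties(thing):
    """Guess properties of `thing` from observed members of its category."""
    category = taxonomy.get(thing)
    peers = [t for t in taxonomy if taxonomy[t] == category and t in observed]
    if not peers:
        return set()
    # A property shared by every observed peer is assumed to generalize.
    return set.intersection(*(observed[p] for p in peers))

print(likely_properties("wolf"))  # lions and goats breathe, so wolves likely do too
```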

Or when playing a game, a human can come up with a strategy like: I’ll level this ability and lean on it for starting, then I’ll level this other ability that takes more time to ramp up while using the first one, then change to this play style after I have the new ability ready. This might be formulated completely based on theoretical ideas about the game, and modified as the player gets more experience.

With current AI models, as far as I can understand, it will see the whole game as an optimization problem and try to find something at random that makes it win more. This is not as scalable as combining theory and experience the way humans do. For example, a human is innately capable of understanding that there is a concept of early game, and that gains made in the early game can compound into a large lead. This is pattern matching as well, but on a higher level.

Theory makes learning more scalable than just trying everything and seeing what works.

93po|8 months ago

I'm a huge fan of Carmack and read the book (Masters of Doom) multiple times and love it, too. But he's a legend for pioneering PC gaming graphics in a way that was feasible for a single (very talented) person to accomplish, and was also pioneering something that already existed on consoles. I think there's a big leap from very cleverly recreating existing very basic and simple 3d graphics for a new platform versus the massive task that is AGI/ASI, which I don't think is something a single person can meaningfully move forward at this point. Even the big jump we got from GPTs was due to many many people.

motorest|8 months ago

> Seeing comments here saying “this problem is already solved”, “he is just bad at this” etc. feels bad. He has given a long time to this problem by now. He is trying to solve this to advance the field. And needless to say, he is a legend in computer engineering or w/e you call it.

This comment, with the exception of the random claim of "he is just bad at this", reads like a thinly veiled appeal to authority. I mean, you're complaining about people pointing out prior work, reviewing the approach, and benchmarking the output.

I'm not sure you are aware, but those items (bibliographical review, problem statement, proposal, comparison/benchmarks) are the very basic structure of an academic paper, which each and every academic paper on any technical subject is required to present in order to be publishable.

I get that there must be a positive feedback element to it, but pay attention to your own claim: "He is trying to solve this to advance the field." How can you tell whether this really advances the field if you want to shield it from any review or comparison? Otherwise what's the point? To go on and claim that ${RANDOM_CELEB} parachuted into a field and succeeded at first try where all so-called researchers and experts failed?

Lastly, "he is just bad at this". You know who is bad at research topics? Researchers specialized in said topic. Their job is literally to figure out something they don't know. Why do you think someone who just started is any different?

vladimirralev|8 months ago

He is not using appropriate models for this conclusion, nor is he using state-of-the-art models in this research; moreover, he doesn't have an expensive foundation model to build upon for 2D games. It's just a fun project.

A serious attempt at video/vision would involve some probabilistic latent space that can be noised in ways that make sense for games in general. I think veo3 proves that ai can generalize 2d and even 3d games, generating a video under prompt constraints is basically playing a game. I think you could prompt veo3 to play any game for a few seconds and it will generally make sense even though it is not fine tuned.

sigmoid10|8 months ago

Veo3's world model is still pretty limited. That becomes obvious very fast once you prompt out-of-distribution video content (i.e. stuff that you are unlikely to find on YouTube). It's extremely good at creating photorealistic surfaces and lighting. It even has some reasonably solid understanding of fluid dynamics for simulating water. But for complex human behaviour (in particular certain motions) it simply lacks the training data. Although that's not really a fault of the model, and I'm pretty sure there will be a way to overcome this as well. Maybe some kind of physics-based simulation as supplemental training data.

altairprime|8 months ago

Is any model currently known to succeed in the scenario that Carmack’s inappropriate model failed?

Intralexical|8 months ago

> I think veo3 proves that ai can generalize 2d and even 3d games, generating a video under prompt constraints is basically playing a game.

In the same way that keeping a dream journal is basically doing investigative journalism, or talking to yourself is equivalent to making new friends, maybe.

The difference is that while they may both produce similar, "plausible" output, one does so as a result of processes that exist in relation to an external reality.

troupo|8 months ago

> I think veo3 proves that ai can generalize 2d and even 3d games

It doesn't. And you said it yourself:

> generating a video under prompt constraints is basically playing a game.

No. It's neither generating a game (that people can play) nor is it playing a game (it's generating a video).

Since it's not a model of the world in any sense of the word, there are issues with even the most basic object permanence. E.g. here's veo3 generating a GTA-style video. Oh look, the car spins 360 and ends up on a completely different street than the one it was driving down previously: https://www.youtube.com/watch?v=ja2PVllZcsI

keerthiko|8 months ago

> generating a video under prompt constraints is basically playing a game

Besides static puzzles (like a maze or jigsaw), I don't believe this analogy holds. A model working with prompt constraints that aren't evolving or being added over the course of "navigating" the generation of its output needs to process zero new information that it didn't come up with itself. Playing a game is different from other generation because it's primarily about reacting to input whose precise timing/spatial details you didn't know in advance, though you can learn that they fall within a known set of higher-order rules. Obviously, the more finite/deterministic/predictably probabilistic the video game's solution space, the more it can be inferred from the initial state (i.e. it reduces to the same type of problem as generating a video from a prompt), which is why models are still able to play video games. But as GP pointed out, the transfer function is negative in such cases: the overarching rules are not predictable enough across disparate genres.

> I think you could prompt veo3 to play any game for a few seconds

I'm curious what your threshold for what constitutes "play any game" is in this claim? If I wrote a script that maps button combinations to average pixel color of a portion of the screen buffer, by what metric(s) would veo3 be "playing" the game more or better than that script "for a few seconds"?
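
The hypothetical baseline script described above might look like this: map the average pixel value of a screen region to a button press. It emits inputs every frame, so by a naive metric it is "playing", despite modeling nothing about the game.

```python
# Hypothetical baseline: button choice from average pixel intensity only.
BUTTONS = ["left", "right", "jump", "noop"]

def dumb_policy(frame_region):
    """Pick a button from the average pixel value of a screen region."""
    avg = sum(frame_region) / len(frame_region)
    return BUTTONS[int(avg) % len(BUTTONS)]

print(dumb_policy([12, 40, 200, 8]))  # emits an input, understands nothing
```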

edit: removing knee-jerk reaction language

pshc|8 months ago

I think we need a spatial/physics model handling movement and tactics watched over by a high level strategy model (maybe an LLM).

IIAOPSW|8 months ago

There's something fascinating about this, because the human ability to "transfer knowledge" (e.g. pick up a never-before-seen video game and quickly understand it) isn't really that general. There's a very particular "overtone window" of the sorts of degrees of difference where it is possible.

If I were to hand you a version of a 2d platformer (let's say Mario) where the gimmick is that you're actually playing the Fourier transform of the normal game, it would be hopeless. You might not ever catch on that the images on screen are completely isomorphic to a game you're quite familiar with and possibly even good at.
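
A toy illustration of the "Fourier-transformed platformer" idea: the 2D FFT of a frame is a lossless, invertible encoding, yet a local feature (a sprite's position) is smeared into a global pattern a human can't read.

```python
import numpy as np

# A tiny synthetic "frame" with a 4x4 sprite somewhere on screen.
frame = np.zeros((32, 32))
frame[20:24, 10:14] = 1.0

spectrum = np.fft.fft2(frame)             # what the hypothetical player sees
recovered = np.fft.ifft2(spectrum).real   # all the information is still there

assert np.allclose(recovered, frame)      # fully isomorphic to the original
print(np.abs(spectrum[0, 0]))             # DC term: total sprite "energy"
```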

But some range of spatial transform gimmicks are cleanly intuitive. We've seen this with games like vvvvvv and braid.

So the general rule seems to be that intelligence is transferable to situations that are isomorphic up to certain "natural" transforms, but not to "matching any possible embedding of the same game in a different representation".

Our failure to produce anything more than hyper-specialists forces us to question what exactly is meant by the ability to generalize, other than just "mimicking an ability humans seem to have".

Certhas|8 months ago

When studying physics, people eventually learn about Fourier transform, and they learn about quantum mechanics, where the Fourier transform switches between describing things in terms of position and of momentum. And amazingly the harmonic oscillator is the same in position and momentum space! So maybe there are other creatures that perceive in momentum space! Everything is relative!

Except that's of course superficial nonsense. Position space isn't an accident of evolution, one of many possible encodings of spatial data. It's an extremely special encoding: the physical laws are local in position space. What happens on the moon does not much impact what happens when I eat breakfast. But points arbitrarily far apart in momentum space do interact. Locality of action is a very, very deep physical principle, and it's absolutely central to our ability to reason about the world at all, to break it apart into independent pieces.

So I strongly reject your example. It makes no sense to present the pictures of a video game in Fourier space. It's highly unnatural for very profound reasons. Our difficulty stems entirely from the fact that our vision system is built for interpreting a world with local rules and laws.

I also see no reason that an AI could successfully transfer between the two representations easily. If you start from scratch it could train on the Fourier space data, but that's more akin to using different eyes, rather than transfer.

chongli|8 months ago

One of my favourite examples of games that are hard to train an AI on is The Legend of Zelda for NES. Many other games of the NES era have (at least in the short term) a goal function which almost perfectly corresponds to some simple memory value such as score or x-position.
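
The "simple memory value" goal function mentioned above can be sketched concretely. This is a hedged illustration: the RAM addresses and weighting below are made up, but the pattern (reward the per-step change in a score or x-position byte read from emulator RAM) is the one that works for many NES games and fails for Zelda.

```python
# Hypothetical RAM addresses, for illustration only.
SCORE_ADDR = 0x07DD   # made-up score byte
XPOS_ADDR = 0x0086    # made-up player x-position byte

def reward(ram, prev_ram):
    """Reward the change in score, plus a small bonus for moving right."""
    return (ram[SCORE_ADDR] - prev_ram[SCORE_ADDR]) \
        + 0.1 * (ram[XPOS_ADDR] - prev_ram[XPOS_ADDR])

prev = [0] * 0x800        # NES work RAM is 2 KiB
cur = list(prev)
cur[SCORE_ADDR] += 5      # score went up
cur[XPOS_ADDR] += 10      # moved right
print(reward(cur, prev))
```

For Zelda there is no such byte to chase: the measurable goals (triforce pieces) change far too rarely to steer short-term behaviour.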

Not Zelda. That game is highly nonlinear and its measurable goals (triforce pieces) are long-term objectives that take a lot of gameplay to obtain. As far as I’m aware, no AI has been able to make even modest progress without any prior knowledge of the game itself.

Yet many humans can successfully play and complete the first dungeon without any outside help. While completing the full game is a challenge that takes dedication, many people achieved it long before having access to the internet and its spoiler resources.

So why is this? Why are humans so much better at Zelda than AIs? I believe that transfer knowledge has a lot to do with it. For starters, Link is approximately human (technically Hylian, but they are considered a race of humans, not a separate species) which means his method of sensing and interacting with his world will be instantly familiar to humans. He’s not at all like an earthworm or an insect in that regard.

Secondly, many of the objects Link interacts with are familiar to most modern humans today: swords, shields, keys, arrows, money, bombs, boomerangs, a ladder, a raft, a letter, a bottle of medicine, etc. Since these objects in-game have real world analogues, players will already understand their function without having to figure it out. Even the triforce itself functions similarly to a jigsaw puzzle, making it obvious what the player’s final objective should be. Furthermore, many players would be familiar with the tropes of heroic myths from many cultures which the Zelda plot closely adheres to (undertake a quest of personal growth, defeat the nemesis, rescue the princess).

All of this cultural knowledge is something we take for granted when we sit down to play Zelda for the first time. We’re able to transfer it to the game without any effort whatsoever, something I have yet to witness an AI achieve (train an AI on a general cultural corpus containing all of the background cultural information above and get it to transfer that knowledge into gameplay as effectively as an unspoiled Zelda beginner).

As for the Fourier transform, I don’t know. I do know that the Legend of Zelda has been successfully completed while playing entirely blindfolded. Of course, this wasn’t with Fourier transformed sound, though since the blindfolded run relies on sound cues I imagine a player could adjust to the Fourier transformed sound effects.

YokoZar|8 months ago

I wonder if this is a case of overfitting from allowing the model to grow too large, and if you might cajole it into learning more generic heuristics by putting some constraints on it.

It sounds like the "best" AI without constraint would just be something like a replay of a record speedrun rather than a smaller set of heuristics of getting through a game, though the latter is clearly much more important with unseen content.

justanotherjoe|8 months ago

I don't get why people are so invested in framing it this way. I'm sure there are ways to achieve the stated objective. John Carmack isn't even an AI guy; why is he suddenly the standard?

GuB-42|8 months ago

Who is an "AI guy"? The field as we know it is fairly new. Sure, neural nets are old hat, but a lot has happened in the last few years.

John Carmack founded Keen technology in 2022 and has been working seriously on AI since 2019. From his experience in the video game industry, he knows a thing or two about linear algebra and GPUs, that is the underlying maths and the underlying hardware.

So, for all intents and purposes, he is an "AI guy" now.

qaq|8 months ago

Keen includes researchers like Richard Sutton, Joseph Modayil, etc. Also, John has been doing it full time for almost 5 years now, so given his background and aptitude for learning, I would imagine by this time he is more of an AI guy than a fairly large percentage of AI PhDs.

varjag|8 months ago

What in your opinion constitutes an AI guy?

refulgentis|8 months ago

Names >> all, and increasingly so.

One phenomenon that laid this bare for me, in a substantive way, was noticing an increasing number of reverent comments re: Geohot in odd places here, which are just as quickly replied to by people with a sense of how he works, as opposed to the keywords he associates himself with. But that only happens here AFAIK.

Yapping, or inducing people to yap about me, is unfortunately much more salient to my expected mindshare than the work I do.

It's getting claustrophobic intellectually, as a result.

Example from the last week is the phrase "context engineering" - Shopify CEO says he likes it better than prompt engineering, Karpathy QTs to affirm, SimonW writes it up as fait accompli. Now I have to rework my site to not use "prompt engineering" and have a Take™ on "context engineering". Because of a couple tweets + a blog reverberating over 2-3 days.

Nothing against Carmack, or anyone else named, at all. i.e. in the context engineering case, they're just sharing their thoughts in realtime. (i.e. I don't wanna get rolled up into a downvote brigade because it seems like I'm affirming the loose assertion Carmack is "not an AI guy", or, that it seems I'm criticizing anyone's conduct at all)

EDIT: The context engineering example was not in reference to another post at the time of writing, now one is the top of front page.

energy123|8 months ago

Credentialism is bad, especially when used as a stick

m_rpn|8 months ago

Maybe cause he's like top 5 most influential computer programmers of all time and knew to be a super human workaholic?

raincole|8 months ago

Because it "confirms" what they already believe in.

surecoocoocoo|8 months ago

Ah some No True Scotsman

Not sure why justanotherjoe is a credible resource on who is and isn't an expert in some new dialectic and euphemism for machine state management. You're that nobody to me :shrug:

Yann LeCun is an AI guy and has simplified it as “not much more than physical statistics.”

A whole lot of AI is decades-old information theory books applied to modern computers.

Either a mem value is or isn’t what’s expected. Either an entire matrix of values is or isn’t what’s expected. Store the results of some such rules. There’s your model.

The words are made up and arbitrary because human existence is arbitrary. You’re being sold on a bridge to nowhere.

Uehreka|8 months ago

These questions of whether the model is “really intelligent” or whatever might be of interest to academics theorizing about AGI, but to the vast swaths of people getting useful stuff out of LLMs, it doesn’t really matter. We don’t care if the current path leads to AGI. If the line stopped at Claude 4 I’d still keep using it.

And like I get it, it’s fun to complain about the obnoxious and irrational AGI people. But the discussion about how people are using these things in their everyday lives is way more interesting.

ferguess_k|8 months ago

Can you please explain "the transfer function is negative"?

I'm wondering whether one has tested with the same model but on two situations:

1) Bring it to superhuman level in game A and then present game B, which is similar to A, to it.

2) Present B to it without presenting A.

If 1) is not significantly better than 2) then maybe it is not carrying much "knowledge", or maybe we simply did not program it correctly.
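
The two-condition comparison above can be made explicit in code. This is only scaffolding: `fresh_model`, `train`, and `evaluate` are hypothetical stand-ins (a dict of per-game "skill" levels) used to show the protocol, not a real RL setup.

```python
def fresh_model():
    return {}  # per-game "skill" levels, a toy stand-in for a real model

def train(model, game, steps):
    out = dict(model)
    out[game] = out.get(game, 0.0) + steps
    return out

def evaluate(model, game):
    return model.get(game, 0.0)

def transfer_gap(game_a, game_b, budget):
    pretrained = train(fresh_model(), game_a, budget)  # (1) master A first
    finetuned = train(pretrained, game_b, budget)
    scratch = train(fresh_model(), game_b, budget)     # (2) B only, no A
    return evaluate(finetuned, game_b) - evaluate(scratch, game_b)

# Positive gap would mean knowledge carried over from A; Carmack reports the
# gap is roughly zero or negative for the models he tested.
print(transfer_gap("A", "B", budget=100))
```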

tough|8 months ago

I think the problem is we train models to pattern match, not to learn or reason about world models

Zanfa|8 months ago

According to Carmack's recent talk [0], SOTA models that have been trained on game A don't perform better or train faster on game B. Even worse, training on game B negatively affects performance in game A when returning to it.

[0] https://www.youtube.com/watch?v=3pdlTMdo7pY

goatlover|8 months ago

I've wondered about the claim that the models played those Atari/2D video games at superhuman levels, because I clearly recall some humans achieving those levels before models were capable of it. It must have been superhuman compared to the average human player, not someone who spent an inordinate amount of time mastering the game.

raincole|8 months ago

I'm not sure why you think so. AI outperforms humans in many games already: basically all the games we care to put money into training a model for.

AI has beaten the best human players in Chess, Go, Mahjong, Texas hold'em, Dota, StarCraft, etc. It would be really, really surprising if some Atari game were the holy grail of human performance that AI cannot beat.

bob1029|8 months ago

Generalization across tasks is clearly still elusive. The only reason we see such success with modern LLMs is because of the heroic amount of parameters used. When you are probing into a space of a billion samples, you will come back with something plausible every time.

The only thing I've seen approximating generalization has appeared in symbolic AI cases with genetic programming. It's arguably dumb luck of the mutation operator, but oftentimes a solution is found that does work for the general case - and it is possible to prove a general solution was found with a symbolic approach.
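
A deterministic mini-sketch of the symbolic idea above: search expression trees that fit a few sample points, then check the recovered formula far outside the training range. Real genetic programming would mutate and recombine a population rather than enumerate, but the payoff is the same: a symbolic form can be checked (or proved) for the general case, which a weight blob cannot.

```python
import itertools

def exprs(depth):
    """Enumerate expression trees over {x, 1, +, *} up to a given depth."""
    if depth == 0:
        yield from ("x", "1")
        return
    yield from exprs(depth - 1)
    for op in ("+", "*"):
        for a, b in itertools.product(list(exprs(depth - 1)), repeat=2):
            yield f"({a}{op}{b})"

target = lambda x: x * x + 1        # hidden rule the search must recover
train_pts = [0, 1, 2, 3]            # a handful of observed cases

found = next(e for e in exprs(2)
             if all(eval(e, {"x": x}) == target(x) for x in train_pts))
# The symbolic form generalizes well beyond the sampled points.
assert all(eval(found, {"x": x}) == target(x) for x in range(-50, 50))
print(found)
```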

hluska|8 months ago

When I finished my degree, the idea that a software system could develop that level of expertise was relegated to science fiction. It is an unbelievable human accomplishment to get to that point and honestly, a bit of awe makes life more pleasant.

Less quality-of-life focused: I don't believe that the models he uses for this research are capable of more. Is it really that revealing?

moralestapia|8 months ago

I wonder how much performance decreases if they just use slightly modified versions of the same game. Like a different color scheme, or a couple different sprites.

fullshark|8 months ago

Just sounds like an example of overfitting. This is all machine learning at its root.

TimByte|8 months ago

The gap between hype and actual generalization is still massive

SquibblesRedux|8 months ago

Indeed, it's nothing but function fitting.

t55|8 months ago

this is what deepmind did 10 years ago lol

smokel|8 months ago

No, they (and many others before them) are genuinely trying to improve on the original research.

The original paper "Playing Atari with Deep Reinforcement Learning" (2013) from DeepMind describes how agents can play Atari games, but these agents had to be specifically trained on every individual game using millions of frames. To accomplish this, simulators were run in parallel, and much faster than real time.

Also, additional trickery was added to extract a reward signal from the games, and there is some minor cheating on supplying inputs.

What Carmack (and others before him) is interested in is learning in a real-life setting, similar to how humans learn.