…but then you just have a graphics card, built to render graphics, that you could tap instead through traditional tooling that’s already widely known and which produces consistent output via local assets.

stego-tech|6 months ago

While the results of the experiment here are interesting from an academic standpoint, it’s the same issue as remote game streaming: the amount of time you have to process input from the player, render visuals and sound, and transmit it back to the player precludes remote rendering for all but the most latency-insensitive games and framerates. It’s publishers and IP owners trying to solve the problem of ownership (in that they don’t want anyone to own anything, ever) rather than tackling actually important issues (such as inefficient rendering pipelines, improving asset compression and delivery methods, improving the sandboxing of game code, etc.).

Trying to make AI render real-time visuals is the wrongest use of the technology.

tliltocatl|6 months ago

Nah, vibe coding is the wrongest use of the technology; this is the way to go. Why? Because good rendering isn’t necessarily the most physically accurate one. You might actually want non-realistic rendering, and (depending on the specific style) it might be hard to impossible to get the right look with a traditional pipeline. Take a “cartoonish” look, for example: toon shading is, frankly, total crap, because artists rely on explicitly non-physical geometry to provide visual cues. This is definitely the future. Render on a normal pipeline (maybe with no lighting model at all), then put it through a style transfer network.

johnisgood|6 months ago

I do not see how it replaces or substitutes for network and authentication latency, especially in a single-player game, where neither is necessary.
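The render-then-stylize pipeline tliltocatl proposes can be sketched in a few lines. This is only an illustration: a fixed 3×3 sharpen kernel stands in for the trained style-transfer network (a real one would be a feed-forward CNN run once per frame), and `render_frame`, `stylize`, and `SHARPEN` are hypothetical names, not from the post. The point is where the network slots in: after the conventional renderer finishes a frame, before display.

```python
def render_frame(w, h):
    """Stand-in conventional renderer: a flat-shaded gradient, no lighting model."""
    return [[(x + y) / (w + h - 2) for x in range(w)] for y in range(h)]

def stylize(frame, kernel):
    """Apply a 3x3 kernel per pixel (edges clamped) -- the slot where a
    style-transfer network would run as a post-process."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    sy = min(max(y + ky - 1, 0), h - 1)
                    sx = min(max(x + kx - 1, 0), w - 1)
                    acc += frame[sy][sx] * kernel[ky][kx]
            out[y][x] = min(max(acc, 0.0), 1.0)  # clamp to displayable range
    return out

SHARPEN = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]  # placeholder for the "style"

frame = render_frame(8, 8)       # 1. render normally
styled = stylize(frame, SHARPEN) # 2. push the finished frame through the stylizer
```

The design point is that the two stages are decoupled: the renderer stays deterministic and asset-driven, and only the final image pass is learned.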
echelon|6 months ago
https://madebyoll.in/posts/game_emulation_via_dnn/demo/
https://madebyoll.in/posts/game_emulation_via_dnn/
Hook world state up to a server and you have multiplayer.
2025 update:
https://madebyoll.in/posts/world_emulation_via_dnn/
https://madebyoll.in/posts/world_emulation_via_dnn/demo/
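The "hook world state up to a server" idea above amounts to splitting authoritative simulation from per-client rendering. A minimal in-process sketch, with all names (`WorldServer`, `client_render`) invented for illustration and the neural renderer stubbed out as a one-line "frame" drawer:

```python
class WorldServer:
    """Owns the authoritative world state; clients only submit inputs."""
    def __init__(self):
        self.positions = {}          # player id -> x coordinate

    def join(self, pid):
        self.positions[pid] = 0

    def apply_input(self, pid, dx):
        self.positions[pid] += dx    # server integrates all player inputs

    def snapshot(self):
        return dict(self.positions)  # what gets broadcast to clients each tick

def client_render(snapshot, width=10):
    """Stand-in for each client's local (neural) renderer: a 1-D text 'frame'."""
    frame = ["."] * width
    for pid, x in snapshot.items():
        frame[x % width] = str(pid)
    return "".join(frame)

server = WorldServer()
server.join(1)
server.join(2)
server.apply_input(1, 3)
server.apply_input(2, 7)
state = server.snapshot()        # same state goes to every client
view = client_render(state)      # each client renders it locally
```

Only the small `snapshot` dict crosses the network, so the latency objection above applies to input round-trips, not to pixels: rendering stays local to each player.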
jschomay|6 months ago