With how much Nvidia is investing in AI-accelerating hardware, I expect this will cost maybe a few dozen dollars and train in a few hours within the next few years.
What I think will be interesting is when commodity hardware can run cheap inference from very capable, specialized models. Pretty sure it will spawn a new golden age of AI-powered desktop applications.
For example, the video game space has already been trying to create AI-powered NPCs, world generation, and storytelling (e.g. Inworld AI).
> For example, the video game space has already been trying to create AI-powered NPCs, world generation, and storytelling (e.g. Inworld AI).
This'll be a niche for a long, long time.
Games are generally carefully crafted to deliver a specific mechanical and/or narrative experience. A world populated by LLM/etc bots or content is one choice of what that experience might be, but it's not going to be a very satisfying one for many game designers -- especially given the current/near state of the technology. There will be games and experiments that explore it, for sure, but the vast majority of games just don't have any need for it.
Whilst I agree with the reservations in the other replies, I think you meant in the future, and I'm sure LLMs will be more trustworthy and up to the task at some point.
What I would really like to see now is all the new TTS models being used more widely. There are still so many games with text-only output. My kids love Alba: A Wildlife Adventure, but the eldest still isn't quite ready to read all the text, so I have to sit with them reading out all the lines.
If anyone has a way of applying universal mods / accessibility features to existing games, I'd love to see someone solve this and would be happy to help with the TTS!
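Most of a universal TTS mod is glue code: capture the on-screen text (via a hook or OCR), then hand it to whatever speech engine is available (pyttsx3, a system voice, a neural TTS model — all interchangeable choices). One piece of that glue can be sketched in plain Python, assuming the game re-renders the same text every frame; `SpokenLineFilter` is a hypothetical name, not part of any existing mod:

```python
# Sketch: games re-render the same dialogue every frame, so before
# handing lines to a TTS engine we must skip anything already spoken,
# otherwise the voice stutters through the same line forever.

class SpokenLineFilter:
    def __init__(self) -> None:
        self._seen: set[str] = set()

    def new_lines(self, screen_text: str) -> list[str]:
        """Return only the lines not yet spoken, in display order."""
        fresh = []
        for line in screen_text.splitlines():
            line = line.strip()
            if line and line not in self._seen:
                self._seen.add(line)
                fresh.append(line)
        return fresh
```

In use, each frame's captured text goes through `new_lines()` and only the output is queued to the engine, so re-rendered dialogue is spoken exactly once.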
> I expect this will cost maybe a few dozen dollars and train in a few hours within the next few years.
I wouldn't count on it. Nvidia's been cleaning up, but their best option for scaling right now is parallelization (bigger clusters, basically). Now that Blackwell is on TSMC, Nvidia is alongside Apple in waiting for new, denser nodes to upgrade to. A real "generational leap" in training cost is going to require some form of efficiency gain that we're not seeing right now. It's possible that Nvidia has something up their sleeve, but I'm not holding my breath.
> What I think will be interesting is when commodity hardware can run cheap inference from very capable, specialized models.
What's funny is, you basically already can. The problem now is integration, and in the case of video games, giving the AI a meaningful role to fill. With today's finest technology, you can enjoy an AI-generated roguelike that is nigh-incomprehensible: https://store.steampowered.com/app/1889620/AI_Roguelite/
As time goes on, I really think developers are just going to not use AI for video games. Maybe I'm missing the "Minecraft moment" for procedurally generated stories here, but the sort of constraints needed to tell a story or create an interactive experience don't exist within LLMs. It's a stochastic nightmare of potential softlocks, contradictions, or outright offensive responses. The majority of places I've seen AI applied today aren't for content creation but for automated moderation.
> For example, the video game space has already been trying to create AI-powered NPCs, world generation, and storytelling (e.g. Inworld AI).
Current AI isn't even close to good enough for video game NPCs and related. We're several breakthroughs away from that being possible at any cost. Those breakthroughs might happen in 3 years, or they might not happen in 10. Hard to predict.
Probably not with the same amount of training time, but I'd imagine a recent MBP GPU could handle GPT-2 training. The biggest challenge is that the training would need to be reimplemented for Metal instead of CUDA.
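The Metal-vs-CUDA gap is smaller than it used to be: PyTorch exposes Apple GPUs through its `mps` backend, so much of a small GPT-2-style training loop runs unchanged once the right device is picked. The sketch below is a minimal, hypothetical illustration of that device fallback (with a single linear layer standing in for the model), not a port of any particular CUDA implementation:

```python
# Sketch: picking the accelerator for a small training run.
# PyTorch's "mps" backend maps tensor ops to Metal on Apple Silicon,
# so the same training step can run on CUDA, Metal, or CPU.

def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Prefer CUDA, fall back to Metal (MPS), then CPU."""
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"

def main() -> None:
    try:
        import torch
    except ImportError:
        print("PyTorch not installed; selection logic only.")
        return
    device = pick_device(
        torch.cuda.is_available(),
        getattr(torch.backends, "mps", None) is not None
        and torch.backends.mps.is_available(),
    )
    # Stand-in for one training step: any nn.Module moved to
    # `device` trains the same way regardless of backend.
    model = torch.nn.Linear(768, 768).to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    x = torch.randn(8, 768, device=device)
    loss = model(x).pow(2).mean()
    loss.backward()
    opt.step()
    print(f"one step on {device}, loss={loss.item():.3f}")

if __name__ == "__main__":
    main()
```

Custom CUDA kernels are the part that genuinely needs rewriting; pure-PyTorch code mostly just needs the device swap above.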
up2isomorphism | 1 year ago:
To me this is a downside compared to NPCs written by humans, since that's the only reason I would want to read their dialogue.
charlescurt123 | 1 year ago:
Have a human-created story and text as a guideline. With that, have genAI generate the text for each stage; you would get different statements every time while staying on track.
It would be interesting to play a game where the characters convey the same information in slightly different ways on every playthrough.
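The "stay on track" idea can be sketched as a guard that accepts a model's rewrite only if it preserves the authored plot facts, falling back to the canonical line otherwise. Here `generate()` is a canned stand-in for a real model call, and all names and lines are hypothetical:

```python
# Sketch: vary NPC phrasing with a generator while a validator keeps
# the plot facts intact. The guard, not the generator, is the point.

CANONICAL = "The key to the tower is buried under the old oak."
REQUIRED_FACTS = ["key", "tower", "oak"]

def generate(seed: int) -> str:
    """Hypothetical model call; here just canned variants."""
    variants = [
        "Folks say the tower's key lies beneath that old oak.",
        "Lovely weather we're having!",  # off-script drift
        "Dig under the oak and you'll find the key to the tower.",
    ]
    return variants[seed % len(variants)]

def on_track(line: str) -> bool:
    """Accept a rewrite only if every authored fact survives."""
    return all(fact in line.lower() for fact in REQUIRED_FACTS)

def npc_line(seed: int) -> str:
    """Use the varied line if it keeps the facts, else fall back."""
    line = generate(seed)
    return line if on_track(line) else CANONICAL
```

A real version would check facts more robustly than substring matching (entities, paraphrase similarity), but the fallback-to-canonical structure is what prevents the softlocks and contradictions raised upthread.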