I’ve always found it crazy that my LLM has access to such terrible tools compared to mine.
It’s left with grepping for function signatures, sending diffs for patching, and running `cat` to read all the code at once.
I, however, run an IDE: I can use a simple refactoring tool to add a parameter to a function, I can “follow symbol” to see where something is defined, I can click and get all usages of a function shown at a glance, and so on.
Is anyone working on making it so LLMs get better tools for actually writing/refactoring code? Or is there some “bitter lesson”-like principle that says effort is always better spent just increasing the context size and slurping up all the code at once?
> Claude Code officially added native support for the Language Server Protocol (LSP) in version 2.0.74, released in December 2025.
I think from training it's still biased towards simple tooling.
But there is also real power in simple tools: a small set of general-purpose tools beats a bunch of narrow, special-case tools. It's easier for humans to use high-level tools, but LLMs can instantly compose the low-level tools for their use case and learn to generalize; writing insane Perl one-liners is second nature for them in a way it isn't for us.
If you watch the tool calls, you'll see they write a ton of small one-off Python programs to test, validate, explore, etc.
If you think about it, any time you use a tool there is probably a 20-line Python program that is better suited to your use case; it's just that it would take you too long to write it, whereas for an LLM that's 0.5 seconds.
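For instance, here is a hedged sketch of the kind of throwaway 20-line program meant here (all names are illustrative, not any agent's actual tooling) — a quick tally of lines of code per file extension in a repo:

```python
import os
from collections import Counter

def loc_by_extension(root: str) -> Counter:
    """Tally line counts per file extension under `root`."""
    counts = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1] or "(no ext)"
            try:
                with open(os.path.join(dirpath, name),
                          encoding="utf-8", errors="ignore") as f:
                    counts[ext] += sum(1 for _ in f)
            except OSError:
                pass  # skip unreadable files
    return counts
```

Nothing clever — exactly the sort of single-purpose script that is faster for an agent to regenerate on demand than for a human to keep around and generalize.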
> I however, run an IDE and can run a simple refactoring tool to add a parameter to a function, I can “follow symbol” to see where something is defined, I can click and get all usages of a function shown at a glance, etc etc
I am surprised that all of the AI tooling mostly revolves around VS Code or its forks, and that JetBrains doesn't seem to have done anything revolutionary in the space.
With how good their refactoring and code-inspection tools are, you'd really think they'd pass off that context information to AI models and be leaps and bounds ahead.
LLMs aren't like you or me. They can comprehend large quantities of code quickly and piece things together easily from scattered fragments, so “go to reference” and the like become much less important. Things change as the number of usages of a symbol grows large, of course, but in most cases the LLM can make perfect sense of things via grep.
Providing it refactoring as a tool also risks confusing it with too many tools.
It's the same reason that waffling for a few minutes via speech to text with tangents and corrections and chaos is just about as good as a carefully written prompt for coding agents.
JetBrains IDEs come with an MCP server that supports some refactoring tools [1]:
> Starting with version 2025.2, IntelliJ IDEA comes with an integrated MCP server, allowing external clients such as Claude Desktop, Cursor, Codex, VS Code, and others to access tools provided by the IDE. This provides users with the ability to control and interact with JetBrains IDEs without leaving their application of choice.
Tidewave.ai does exactly that. It's made Claude Code so much more functional. It provides MCP servers to:
- search all your code efficiently
- search all documentation for libraries
- access your database and get real data samples (not just abstract data types)
- select design components from your Figma project and have them implemented for you
- let Claude see what is rendered in the browser
It's basically the IDE for your LLM client. It really closes the loop and has made Claude and me so much more productive.
Highly recommended, and cheap at $10/month.
PS: my personal opinion; I have zero affiliation with them.
LLMs operate on text. They can take in text, and they can produce text. Yes, some LLMs can also read and even produce images, but at least as of today, they are clearly much better at using text[1].
So cat, ripgrep, etc are the right tools for them. They need a command line, not a GUI.
1: Maybe you'd argue that Nano Banana is pretty good. But would you say its prompt adherence is good enough to produce, say, a working Scratch program?
You can give agents the ability to check VSCode Diagnostics, LSP servers and the like.
But they constantly ignore them and use their base CLI tools instead, it drives me batty. No matter what I put in AGENTS.md or similar, they always just ignore the more advanced tooling IME.
LSP also kind of sucks. But the problem is all the big companies want big valuations, so they only chase generic solutions. That's why everything is a VS Code clone, etc..
Not coding agents, but we do a lot of work trying to find the best tools, and the result is always that the simplest possible general tool that can get the job done beats a suite of complicated tools and rules on how to use them.
This isn't completely the answer to what you want, but skills do open a lot of doors here. Anything you can do on a command line can be turned into a skill, after all.
Did you evaluate using screenshots or some sort of rendered visualization instead of the CLI? I wonder if Claude has better visual intelligence when viewing images (lots of these in its training set) than ASCII schematics (probably very few of those in the corpus).
> As a mirror to real-world agent design: the limiting factor for general-purpose agents is the legibility of their environments, and the strength of their interfaces. For this reason, we prefer to think of agents as automating diligence, rather than intelligence, for operational challenges.
> The only other notable setback was an accidental use of the word "revert" which Codex took literally, and ran git revert on a file where 1-2 hours of progress had been accumulating.
If I tell Claude to "revert that last change, it isn't right, try this instead" and Claude hasn't committed recently it will happily `git checkout ...` and blow away all recent changes instead of reverting the "last change".
(Which, it's not wrong or anything -- I did say "revert that change" -- it's just annoying. And telling `CLAUDE.md` to commit more often doesn't work consistently, because Claude is a dummy sometimes).
Amazing that these tools don't maintain a replayable log of everything they've done.
git revert is not a destructive operation, though, so it's surprising that it caused any loss of data. Maybe they meant git reset --hard or something like that. Wild if Codex would run that.
I love the interview at the end of the video. The kubectl-inspired CLI, and the feedback for improvements from Claude, as well as the alerts/segmentation feedback.
You could take those, make the tools better, and repeat the experiment, and I'd love to see how much better the run would go.
I keep thinking about that when it comes to things like this - the Pokemon thing as well. The quality of the tooling around the AI is only going to become more and more impactful as time goes on. The more you can deterministically figure out on behalf of the AI to provide it with accurate ways of seeing and doing things, the better.
Ditto for humans, of course; that's the great thing about optimizing for AI. It's really just “if a human was using this, what would they need?” Think about it: with the paths not being properly connected, a human would have to sit down and really think about it, drawing or sketching the layout to visualize and understand what coordinates to work in. If you couldn't do that, you too would probably struggle for a while. But if the tool provided you with enough context to understand that a path wasn't connected properly, and why, you'd be fine.
I see this sentiment of using AI to improve itself a lot but it never seems to work well in practice. At best you end up with a very verbose context that covers all the random edge cases encountered during tasks.
For this to work the way people expect you’d need to somehow feed this info back into fine tuning rather than just appending to context. Otherwise the model never actually “learns”, you’re just applying heavy handed fudge factors to existing weights through context.
I would’ve walked for days to a CompUSA and spent my life savings if there was anything remotely equivalent to this when I was learning C on my Macintosh 4400 in 1997
First time I'm seeing realistic timelines from a vibe-coded project. Usually everyone who vibe-codes just says they did it in a few hours, no matter the project.
Interesting article, but it doesn't actually discuss how well it performs at playing the game. There is in fact a 1.5-hour YouTube video, but it would have been nice to get a bit of an outcome postmortem. It's like “here's the methods and setup section of a research paper, but for the conclusion you need to watch this movie and make your own judgments!”
It does discuss that? Basically, it has a good grasp of finances and often knows what “should” be done, but it struggles with actually building anything beyond placing toilets and hotdog stalls. To be fair, its map interface is not exactly optimal, and a multimodal model might fare quite a bit better at understanding the 2D map (verticality would likely still be a problem).
Yes, you can literally just ask Claude Code to create a status line showing context usage. I had it make a colored progress bar of context usage, changing through green, yellow, orange, and red as context fills up. Instructions to install:
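For flavor, a minimal sketch of what such a status line might boil down to — this is not the poster's elided install instructions; the color thresholds are assumptions, and the usage percentage would come from whatever Claude Code pipes to your configured status-line command:

```python
def context_bar(used_pct: float, width: int = 20) -> str:
    """Render context usage as a colored ASCII progress bar:
    green below 50%, yellow below 75%, orange below 90%, red above."""
    filled = round(width * used_pct / 100)
    if used_pct < 50:
        color = "\033[32m"        # green
    elif used_pct < 75:
        color = "\033[33m"        # yellow
    elif used_pct < 90:
        color = "\033[38;5;208m"  # orange (256-color)
    else:
        color = "\033[31m"        # red
    bar = "#" * filled + "-" * (width - filled)
    return f"{color}[{bar}] {used_pct:.0f}%\033[0m"

print(context_bar(42))  # a green bar at 42%
```

A status-line command is just a program that prints one line, so the whole feature is this function plus the wiring Claude Code generates for you.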
> Your outlook above is too self critical. This is the first time an AI has beaten this park much less played a full game of RollerCoaster Tycoon through a TUI. There are important learnings for B2B SaaS. This isn't LinkedIn (it is, in fact, LinkedIn). But seriously. What can we learn here.
I can corroborate that spatial reasoning is still a challenge. In this case it's the complexity of the game world, but anyone who has used Codex/Claude with complex UIs in CSS or a native UI library will recognize the shortcomings fairly quickly.
I've done this! Given the right interface I was surprised at how well it did. Prompted it "You're controlling a character in Old School RuneScape, come up with a goal for yourself, and don't stop working on it until you've achieved it". It decided to fish for and cook 100 lobsters, and it did it pretty much flawlessly!
The biggest downside was its inability to see (literally). Getting lists of interactable game objects, NPCs, etc. was fine when it decided to do something that didn't require any real-time input. Sailing, or anything that required it to react to what's on screen, was pretty much impossible without more tooling to manage the reacting part for it (e.g. a tool to navigate automatically to some location).
People have been botting on Runescape since the early 2000s. Obviously not quite at the Claude level :). The botting forums were a group of very active and welcoming communities. This is actually what led me to Java programming and computer science more broadly--I wrote custom scripts for my characters.
I still have some parts of the old Rei-net forum archived on an external somewhere.
Given that Dwarf Fortress has an ASCII interface, it may actually be a lot easier to set Claude up to work with it. Also, a lot of the challenge of Dwarf Fortress is just knowing all the different mechanics and how they work, which is something Claude should be good at.
This was an interesting application of AI, but I don't really think this is what LLMs excel at. Correct me if I'm wrong.
It was interesting that the poster vibe-coded (I'm assuming) the CTL from scratch; Claude was probably pretty good at doing that, and that task could likely have been completed in an afternoon.
Pairing the CTL with the CLI makes sense, as that's the only way to gain feedback from the game. Claude can't easily do spatial recognition (yet).
A project like this would entirely depend on the game being open source. I've seen some very impressive applications of AI online with closed-source games and entire algorithms dedicated to visual reasoning.
I'm still trying to figure out how this guy (https://www.youtube.com/watch?v=Doec5gxhT_U) was able to have AI learn to play Mario Kart nearly perfectly. I find his work to be very impressive.
I guess because RCT2 is more data-driven than visually challenging, this solution works well, but having an LLM try to play a racing game sounds like it would be disastrous.
Not sure if you clocked this, but the Mario Kart AI is not an LLM. It's a randomized neural net that was trained with reinforcement learning. Apologies if I misread.
While this seems cool at first, it does not demonstrate superiority over a true custom-built AI for RollerCoaster Tycoon.
It is a curiosity, good for headlines, but the takeaway is if you really need an actual good AI, you are still better off not using an LLM powered solution.
This is a cool idea. I wanted to do something like this by adding a Lua API to OpenRCT2 that allows you to manipulate and inspect the game world. Then, you could either provide an LLM agent the ability to write and run scripts in the game, or program a more classic AI using the Lua API. This AI would probably perform much better than an LLM - but an interesting experiment nonetheless to see how a language model can fare in a task it was not trained to do.
I thought the opening paragraph was the agent prompt, haha:
> The park rating is climbing. Your flagship coaster is printing money. Guests are happy, for now. But you know what's coming: the inevitable cascade of breakdowns, the trash piling up by the exits, the queue times spiraling out of control.
Several times now I've seen ASCII used at first for these kinds of problems. I think it's because it's counter-intuitive: for us humans ASCII is text, and we tend to forget spatial awareness.
I find this aspect of humans interacting with AIs very interesting.
Surely it must have digested plenty of walkthroughs for any game?
A linear puzzle game like that I would expect the AI to just fly through first time, considering it has probably read 30 years of guides and walkthroughs.
I've been doing game development, and it starts to hallucinate more rapidly when it doesn't understand things like the direction it's placing things in or which way the camera is oriented.
Gemini models are a little better at spatial reasoning, but we're still not there yet, because these models were not designed to do spatial reasoning; they were designed to process text.
In my development, I also use the ASCII matrix technique.
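The ASCII-matrix technique can be as simple as serializing map tiles into a character grid before prompting. A minimal Python sketch, assuming made-up tile codes (this is an illustration, not any particular project's format):

```python
def ascii_map(width: int, height: int, features: dict) -> str:
    """Render {(x, y): single-char code} as an ASCII matrix,
    one row per line, '.' for empty tiles."""
    return "\n".join(
        "".join(features.get((x, y), ".") for x in range(width))
        for y in range(height)
    )

# Hypothetical tile codes: E = entrance, P = path, R = ride
tiles = {(1, 0): "E", (1, 1): "P", (1, 2): "P", (2, 2): "R"}
print(ascii_map(4, 3, tiles))
# .E..
# .P..
# .PR.
```

The point is that the model sees adjacency directly in the text, instead of having to reconstruct it from a list of coordinates.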
Dota 2 is a real-time strategy game with an arguably more complex micro game (though a far simpler macro game than AoE2 — and macro is far easier for an AI to master), and OpenAI Five completely destroyed the reigning champions. In 2019. Perfect coordination between units, superhuman mechanical skill, perfect consistency.
I see no reason why AoE2 would be any different.
Worth noting that OpenAI Five was mostly deep reinforcement learning and massive distributed training; it didn't use image-to-text and an LLM reasoning about what it sees to make its “decisions”. But that wouldn't have been a good way to build an AI like that anyway.
Oh, and humans still play Dota. It's still a highly competitive community, so that wasn't destroyed at all; most teams now use AI to study tactics and strategy.
I suspect the fun is playing against real people and the unexpected things they do. Just because the AI can beat you does not necessarily make it fun. People still play chess despite Stockfish existing.
> also completely unfazed by the premise that it has been 'hacked into' a late-90's computer game. This was surprising, but fits with Claude's playful personality and flexible disposition.
When I read things like this, I wonder if it's just me not understanding this brave new world, or half of AI developers are delusional and really believe that they are dealing with a sentient being.
Yes, I believe so. Also things like forcing a “key insight” summary after the “excels vs. struggles” section.
I would take any descriptions like "comprehensive", "sophisticated" etc with a massive grain of salt. But the nuts and bolts of how it was done should be accurate.
Honestly, I thought the AI would do better than what is described. RCT is pretty simple when it comes to things like what to set a ride's price to. I think the game has a straightforward formula for how guests respond to prices.
Interesting that this is on the ramp.com domain. I'm surprised that in this tech market they can pay devs to hack on RollerCoaster Tycoon. Maybe there's some crossover I'm missing, but it seems like a sweet gig, honestly.
Crusader Kings is a franchise where I could really see LLMs shine. One of the main current criticisms of the game is that there's a lack of events, and that they often don't feel relevant to your character.
An LLM could potentially make events far more tailored to your character, and could actually respond to things happening in the world far more than the game currently does. It could create some really cool emergent gameplay.
I actually think it would be pretty fun to code something to play video games for me, it has a lot of overlap with robotics. Separately, I learned about assembly from cheat engine when I was a kid.
That’s not the point of this. This was an exercise to measure the strengths and weaknesses of current LLMs in operating a company and managing operations, and the video game was just the simulation engine.
You do you. I find this exceedingly cool and I think it's a fun new thing to do.
It's kind of like how people started watching Let's Plays and that turned into Twitch.
One of the coolest things recently is VTubers in mocap suits using AI performers to do single person improv performances with. It's wild and cool as hell. A single performer creating a vast fantasy world full of characters.
LLMs and agents playing Pokemon and StarCraft? Also a ton of fun.
ninkendo|1 month ago
nbardy|1 month ago
KronisLV|1 month ago
mulmboy|1 month ago
fragmede|1 month ago
> Added LSP (Language Server Protocol) tool for code intelligence features like go-to-definition, find references, and hover documentation
https://github.com/anthropics/claude-code/blob/main/CHANGELO...
hippo22|1 month ago
fancy_pantser|1 month ago
selcuka|1 month ago
[1] https://www.jetbrains.com/help/idea/mcp-server.html#supporte...
ricw|1 month ago
Wowfunhappy|1 month ago
JimDabell|1 month ago
https://github.com/cased/kit
girvo|1 month ago
hahahahhaah|1 month ago
rudedogg|1 month ago
https://paulgraham.com/ds.html
ramraj07|1 month ago
elif|1 month ago
BryantD|1 month ago
karlgkk|1 month ago
When I think about it, to get these tools to be most effective you have to be able to page things in and out of their context windows.
What was once a couple of queries is now going to be dozens, hundreds, or even more from the LLM.
For code, that means querying the AST in a way that lets you limit the size of the output.
I wonder which SAST vendor Anthropic will buy.
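As a rough illustration of “query the AST with limited output”, here is a hedged Python sketch using the standard `ast` module — the function name and paging scheme are assumptions for illustration, not anything Anthropic or a SAST vendor ships:

```python
import ast

def function_signatures(source: str, limit: int = 20) -> list[str]:
    """Return up to `limit` function signatures from a module's
    source, so an agent can page symbols into its context instead
    of reading the whole file."""
    sigs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            sigs.append(f"line {node.lineno}: {node.name}({args})")
            if len(sigs) >= limit:
                break
    return sigs

src = "def add(a, b):\n    return a + b\n\ndef mul(x, y):\n    return x * y\n"
print(function_signatures(src, limit=1))  # ['line 1: add(a, b)']
```

The `limit` parameter is the important part: it bounds how many tokens a single query can spend, which is what makes paging in and out of the context window workable.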
throwawaygo|1 month ago
Jaysobel|1 month ago
Session transcript using Simon Willison's claude-code-transcripts
https://htmlpreview.github.io/?https://gist.githubuserconten...
Reddit post
https://www.reddit.com/r/ClaudeAI/comments/1q9fen5/claude_co...
OpenRCT2!!
https://github.com/jaysobel/OpenRCT2
Project repo
https://github.com/jaysobel/OpenRCT2
theptip|1 month ago
cheschire|1 month ago
fragmede|1 month ago
How hard would it be to use this with OpenAI's offerings instead? In particular, IMO, OpenAI is better at “looking” at pictures than Claude.
rashidae|1 month ago
hk__2|1 month ago
qaboutthat|1 month ago
_flux|1 month ago
unknown|1 month ago
[deleted]
alt227|1 month ago
esafak|1 month ago
Filligree|1 month ago
pocketarc|1 month ago
wonnage|1 month ago
lukebechtel|1 month ago
what a world!
AndrewKemendo|1 month ago
People don’t appreciate what they have
yoyohello13|1 month ago
falloutx|1 month ago
fnordpiglet|1 month ago
Sharlin|1 month ago
cyanydeez|1 month ago
nipponese|1 month ago
Maybe this is obvious to Claude users, but how do you know your remaining context level? Is there UI for this?
adithyareddy|1 month ago
d4rkp4ttern|1 month ago
https://github.com/pchalasani/claude-code-tools?tab=readme-o...
neilfrndes|1 month ago
MattGaiser|1 month ago
margorczynski|1 month ago
1) The map is a grid
2) Turn based
maxall4|1 month ago
What is this? A LinkedIn post?
mcintyre1994|1 month ago
From the transcript: https://htmlpreview.github.io/?https://gist.githubuserconten... :)
haunter|1 month ago
TaupeRanger|1 month ago
khoury|1 month ago
itsgrimetime|1 month ago
reactordev|1 month ago
https://ubos.tech/mcp/runescape-mcp-server-rs-osrs/
ASpring|1 month ago
ideashower|1 month ago
phreeza|1 month ago
rsanek|1 month ago
__turbobrew__|1 month ago
sodafountan|1 month ago
I'm still trying to figure out how this guy: https://www.youtube.com/watch?v=Doec5gxhT_U
tadfisher|1 month ago
deadbabe|1 month ago
colesantiago|1 month ago
And these are the same people that put countless engineers through gauntlets of bizarre interview questions and exotic puzzles to hire engineers.
But when it comes to C++, just vibe it, obviously.
falloutx|1 month ago
equinumerous|1 month ago
equinumerous|1 month ago
mentos|1 month ago
karanveer|1 month ago
I've been trying to locate the dev of this game for a long time, so I can thank them for an amazing experience.
If anyone knows their social or anything, please do share, including OP.
Also, nice work on CC in this. I may actually be interested in Claude Code now.
kinduff|1 month ago
js4ever|1 month ago
neom|1 month ago
alt227|1 month ago
skybrian|1 month ago
joshribakoff|1 month ago
neonmagenta|1 month ago
vermilingua|1 month ago
petcat|1 month ago
pbmonster|1 month ago
bawolff|1 month ago
ddtaylor|1 month ago
HelloUsername|1 month ago
seu|1 month ago
bspammer|1 month ago
vinyl7|1 month ago
sriram_sun|1 month ago
Am I reading a Claude generated summary here?
alt227|1 month ago
> "This was surprising, but fits with Claude's playful personality and flexible disposition."
afro88|1 month ago
rnmmrnm|1 month ago
blibble|1 month ago
not just make up bullshit about events
azhenley|1 month ago
Bluescreenbuddy|1 month ago
bawolff|1 month ago
joshcsimmons|1 month ago
emeril|1 month ago
Pretty heavy/slow JavaScript, but pretty functional nonetheless.
fuzzy_lumpkins|1 month ago
nacozarina|1 month ago
mcphage|1 month ago
Deukhoofd|1 month ago
huflungdung|1 month ago
[deleted]
Kapura|1 month ago
I enjoy playing video games myself. Separately, I enjoy writing code for video games. I don't need AI for either of these things.
gordonhart|1 month ago
bigyabai|1 month ago
It's still a neat perspective on how to optimize for super-specific constraints.
rangestransform|1 month ago
markbao|1 month ago
unknown|1 month ago
[deleted]
echelon|1 month ago
jsbisviewtiful|1 month ago