I do really like the Unix approach Claude Code takes, because it makes it really easy to create other Unix-like tools and have Claude use them with basically no integration overhead. Just give it the man page for your tool and it'll use it adeptly with no MCP or custom tool definition nonsense. I built a tool that lets Claude use the browser and Claude never has an issue using it.
How does Claude Code use the browser in your script/tool? I've always wanted to control my existing Safari session windows rather than a Chrome or a separate/new Chrome instance.
The light-switch moment for me was when I realized I could tell Claude to use linters instead of telling it to look for problems itself. The latter generally works, but having it call tools is way more efficient. I didn't even tell it which linters to use; I asked it for suggestions and it gave me about a dozen, I installed them, and it started using them without further instruction.
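That division of labor is easy to wire up by hand, too: run a linter as a subprocess and hand the model its text output instead of asking it to eyeball the code. A minimal sketch, using the stdlib byte-compiler as a stand-in checker (any real linter slots into the same text-in/text-out shape — the `ruff` example in the comment is just one option):

```python
import subprocess
import sys

def lint(path: str) -> str:
    """Run a checker over `path` and return its diagnostics as plain text.

    Uses the stdlib byte-compiler as a stand-in; swap in any real linter
    (e.g. ["ruff", "check", path]) -- the text-in/text-out contract is the same.
    """
    result = subprocess.run(
        [sys.executable, "-m", "py_compile", path],
        capture_output=True, text=True,
    )
    # The checker's stderr is exactly the text you'd feed back into the model.
    return result.stderr or "no issues found"

if __name__ == "__main__":
    with open("/tmp/bad.py", "w") as f:
        f.write("def broken(:\n")  # deliberate syntax error
    print(lint("/tmp/bad.py"))
```

The point is that the model never has to "be" the linter; it only has to read the linter's report, which is far cheaper and more reliable.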
I had tried coding with ChatGPT a year or so ago and the effort needed to get anything useful out of it greatly exceeded any benefit, so I went into CC with low expectations, but have been blown away.
My mind was blown when Claude randomly called adb/logcat on my device connected via USB and running my Android app, ingesting the real-time log streams to debug the application in real time. A mind-boggling moment for me. All because it can call "simple" tools/CLI applications and use their outputs. This has motivated me to adjust some of my own CLI applications and tools to have better inputs, outputs, and documentation, so that Claude can figure them out and call them when needed. It will unlock so many interesting workflows, chaining things together (but in a clever way).
I have some repair shop experience, and in my experience, a massive bottleneck in repairing truly complex devices is diagnostics. Often, things are "repaired" by swapping large components until the issue goes away, because diagnosing issues in any more detail is more of an arcane art than something you can teach an average technician to do.
And I can't help but think: what would a cutting edge "CLI ninja" LLM like Claude be able to do if given access to a diagnostic interface that exposes all the logs and sensor readings, a list of known common issues and faults, and a full technical reference manual?
All GUI apps are different, each being unhappy in its own way. Moated fiefdoms they are, scattered within the boundaries of their operating system. CLI is a common ground, an integration plaza where the peers meet, streams flow and signals are exchanged. No commitment needs to be made to enter this information bazaar. The closest analog in the GUI world is Smalltalk, but again - you need to pledge your allegiance before entering one.
Just because a popular new tool runs in the terminal, doesn't make it a shining example for the "Unix philosophy" lol.
the comparison makes no sense if you think about it for more than 5 seconds and is hacker news clickbait you and i fell for :(
1. Small programs that do a single thing and are easy to comprehend.
2. Those programs integrate with one another to achieve more complex tasks.
3. Text streams are the universal interface and state is represented as text files on disk.
Sounds like the UNIX philosophy is a great match for LLMs that use text streams as their interface. It's just so normalized that we don't even "see" it anymore. The fact that all your tools work on files, are trivially callable by other programs with a single text-based interface of exec(), and output text makes them usable and consumable by an LLM with nothing else needed. This didn't have to be how we built software.
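To make that concrete: the entire integration surface between a program (or an LLM harness) and a Unix tool is exec() plus text. A sketch composing two do-one-thing tools the way a pipe would (the specific tools are arbitrary):

```python
import subprocess

def run_tool(argv: list[str], stdin_text: str = "") -> str:
    """exec() a Unix tool, feed it text, get text back -- the whole interface."""
    result = subprocess.run(argv, input=stdin_text, capture_output=True, text=True)
    return result.stdout

# Compose two do-one-thing tools, `sort | uniq -c` style.
lines = "b\na\nb\n"
sorted_text = run_tool(["sort"], lines)
counted = run_tool(["uniq", "-c"], sorted_text)
print(counted)
```

Nothing here is specific to LLMs, which is the point: any harness that can exec a binary and read text inherits the whole toolbox for free.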
The Unix philosophy here is less about it being a terminal app (it's a very rich terminal app, lots of redrawing the whole screen etc) and more about the fact that giving a modern LLM the ability to run shell commands unlocks an incredibly useful array of new capabilities.
An LLM can do effectively anything that a human can do by typing commands into a shell now.
I don't remember any advanced computer user, including developers saying that the CLI is dead.
The CLI has been dead for end-users since computers became powerful enough for GUIs, but the CLI has always been there behind the scenes. The closest we have been to the "CLI is dead" mentality was maybe in the late 90s, with pre-OSX MacOS and Windows, but then OSX gave us a proper Unix shell, Windows gave us PowerShell, and Linux and its shell came to dominate the server market.
I think it might loop back around pretty quick. I've been using it to write custom GUI interfaces to streamline how I use the computer; I'm working piecemeal toward an entire desktop environment custom-made to my own quirky preferences. In the past, a big part of the reason I used the terminal so often for basic things was general frustration and discomfort with the mainstream GUI tools, but that's rapidly changing for me.
I implore people who are willing and able to send the contents and indices of their private notes repository to cloud based services to rethink their life decisions.
Not around privacy, mind you. If your notes contain nothing that you wouldn’t mind being subpoenaed or read warrantlessly by the DHS/FBI, then you are wasting your one and only life.
My experience has been the opposite — a shell prompt is too many degrees of freedom for an LLM, and it consistently misses important information.
I’ve had much better luck with constrained, structured tools that give me control over exactly how the tools behave and what context is visible to the LLM.
It seems to be all about making doing the correct thing easy, the hard things possible, and the wrong things very difficult.
I've done exactly this with MCP:
{
  "name": "unshare_exec",
  "description": "Run a binary in isolated Linux namespaces using unshare",
  "inputSchema": {
    "type": "object",
    "properties": {
      "binary": {"type": "string"},
      "args": {"type": "array", "items": {"type": "string"}}
    },
    "required": ["binary"],
    "additionalProperties": false
  }
}
It started as unshare and ended up being a bit of a yak-shaving endeavor to make things work, but I was able to get some surprisingly good results using gemma3 locally and giving it access to run arbitrary Debian-based utilities.
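For illustration, a minimal sketch of how a server might validate and dispatch a call matching that schema. The specific namespace flags are my own choice, not part of the schema, and a real MCP server would do this inside its framework's handler:

```python
def build_unshare_cmd(binary, args=None):
    """Construct the argv for the `unshare_exec` tool described above.

    The namespace flags are illustrative; a real server would pick them
    to suit its sandboxing needs.
    """
    return ["unshare", "--user", "--pid", "--fork", "--", binary, *(args or [])]

def handle_tool_call(params):
    # Enforce the schema by hand: `binary` required, no extra properties.
    allowed = {"binary", "args"}
    if "binary" not in params or set(params) - allowed:
        raise ValueError("invalid tool call")
    return build_unshare_cmd(params["binary"], params.get("args"))

if __name__ == "__main__":
    cmd = handle_tool_call({"binary": "uname", "args": ["-a"]})
    print(cmd)
    # subprocess.run(cmd)  # requires unshare(1) and user-namespace support
```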
This really resonated with me; it echoes the way I've come to appreciate Claude Code in a terminal for working in/on/with unix-y hosts.
A trick I use often with this pattern is (for example):
"You can run shell commands. Use tmux to find my session named 'bingo' and view the pane in there. You can also use tmux to send that pane keystrokes. When you run shell commands, please run them in that tmux pane so I can watch. Right now that pane is logged into my Cisco router..."
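The plumbing behind that prompt is just two tmux subcommands. A sketch of the invocations the model ends up issuing, wrapped in Python for illustration (the session name comes from the example above):

```python
import shlex

def tmux_send(session: str, command: str) -> list[str]:
    """argv that types `command` into the given tmux session's active pane."""
    return ["tmux", "send-keys", "-t", session, command, "Enter"]

def tmux_view(session: str) -> list[str]:
    """argv that captures the pane's current contents as text."""
    return ["tmux", "capture-pane", "-p", "-t", session]

print(shlex.join(tmux_send("bingo", "show ip interface brief")))
print(shlex.join(tmux_view("bingo")))
```

Because both the keystrokes in and the captured pane out are plain text, the model can drive anything running in that pane — a router login included — with no special integration.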
One point to add about integration between LLMs and Obsidian: plugins.
Obsidian has a plugin system that can be easily customized. You can run your own JS scripts from a local folder. Claude Code is excellent at creating and modifying them on the fly.
For example, I built a custom program that syncs Obsidian files with a publish flag to my GitHub repo, which triggers a Netlify build. My website updates when I update my vault and run a sync.
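A sketch of what that sync step might look like, assuming notes carry a `publish: true` line in their frontmatter — the flag name and directory layout here are illustrative, not the actual program:

```python
from pathlib import Path
import shutil

def sync_published(vault: Path, site_dir: Path) -> list[str]:
    """Copy every note whose frontmatter contains `publish: true` into the
    site repo directory. Committing/pushing (which triggers the build) is
    left out of the sketch."""
    site_dir.mkdir(parents=True, exist_ok=True)
    published = []
    for note in vault.rglob("*.md"):
        head = note.read_text(encoding="utf-8")[:500]  # frontmatter lives at the top
        if "publish: true" in head:
            shutil.copy2(note, site_dir / note.name)
            published.append(note.name)
    return sorted(published)
```

This is exactly the kind of small, self-contained glue script Claude Code writes well on the first try.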
I do the same with Emacs and howm-mode. My system has improved a lot since I started using Claude Code. Now I've implemented all the features I missed from Evernote!
LLMs are making open source programs both more viable and more valuable.
I have many programs I use that I wish were a little different, but even if they were open source, it would take a while to acquaint myself with the source code organization to make these changes. LLMs, on the other hand, are pretty good at small self-contained changes like tweaks or new minor features.
This makes it easier to modify open source programs, but also means that if a program isn't open source, I can't make these changes at all. Before, I wasn't going to make the change anyway, but now that I actually can, the ability to make changes (i.e. the program is open source) becomes much more important.
Open-weights only are also not enough, we need control of the dataset and training pipeline.
The average user like me wouldn't be able to run pipelines without serious infrastructure, but it's very important to understand how the data is used and how the models are trained, so that we own the model and can assess its biases openly.
Codex CLI had some huge upgrades in the past few months.
Before the GPT-5 release it was a poor imitation IMO - in the macOS terminal it somehow even disabled copy and paste!
Codex today is almost unrecognizable in comparison to that version. It's really good. I use both it and Claude Code almost interchangeably at the moment and I'm not really feeling that one is notably better than the other.
- significantly less obsequious (very few "you're absolutely right" that Claude vomits out on every interaction)
- less likely to forget and ignore context and AGENTS.md instructions
- fewer random changes claiming "now everything is fixed" in the first 30-50% of context
- better understanding of usage rules (see link below), one-shotting quite a few things Claude misses
Language + framework: Elixir, Phoenix LiveView, Ash, Oban, Reactor
SLoC: 22k lines
AGENTS.md: some simple instructions, pointing to two MCPs (Tidewave and Stripe), requirement to compile before moving onto next file, usage rules https://hexdocs.pm/usage_rules/readme.html
Codex with gpt-5-codex (high) is like an outsourced consultant. You give them the specs and a while later you get the output. It doesn't communicate much during the run (especially the VSCode plugin is really quiet).
Then you check the result and see what happened. It's pretty good at one-shotting things if it gets the gist, but if it goes off the rails you can't go back three steps and redirect.
On the other hand, Claude Code is more like pair programming: it chats away while doing things, telling you what it's doing and why "out loud". It's easier to interrupt when you see it going off track; it'll just stop and ask for more instructions (unlike Copilot, where if you don't want it to rm the database file you need to be really fast and skip the operation AND hit the stop button below the chatbox).
I use both regularly, GPT is when I know what to do and have it typed out. Claude is for experimenting and dialogue like "what would be a good idea here?" type of stuff.
I've found GPT-5-Codex (the model used by default by OpenAI Codex CLI) to be superior but, as others have stated, slower.
Caveat: requires a Linux environment, macOS, or WSL.
In general, I find that it will write smarter code, perform smarter refactors, and introduce less chaos into my codebase.
I'm not talking about toy codebases. I use agents on large codebases with dozens of interconnected tools and projects. Claude can be a bit of a nightmare there because it's quite myopic. People rave about it, but I think that's because they're effectively vibe-coding vastly smaller, tight-scoped things like tools and small websites.
On a larger project, you need a model to take the care to see what existing patterns you're using in your code, whether something's already been done, etc. Claude tends to be very fast but generate redundant code or comical code (let's try this function 7 different ways so that one of those ways will pass). This is junior coder bullshit. GPT-5-Codex isn't perfect but there's far far less of that. It takes maybe 5x longer but generates something that I have more confidence in.
I also see Codex using tools more in smart ways. If it's refactoring, it'll often use tools to copy code rather than re-writing the code. Re-writing code is how so many bugs have been introduced by LLMs.
I've not played with Sonnet 4.5 yet so it may have improved things!
I wouldn’t say it’s particularly good, at least not based on my limited experience. While some people argue it’s significantly better, I’ve found it to be noticeably slower than Claude, often taking two to three times longer to complete the same tasks. That said, I’m open to giving it another try when I have more time; right now my schedule is too tight to accommodate its slower pace.
There's something deeply hypocritical about a blog that criticizes the "SaaS Industrial Complex"[1], while at the same time praising one of the biggest SaaS in existence, while also promoting their own "AI-first" strategy and marketing company.
What even is this? Is it all AI slop? All of these articles are borderline nonsensical, in that weird dreamy tone that all AI slop has.
To see this waxing poetic about the Unix philosophy, which couldn't be farther from the modern "AI" workflow, is... something I can't quite articulate, but let's go with "all shades of wrong". Seeing it on the front page of HN is depressing.
Of note: occasionally I ask a FreeBSD question. Claude (and I think others) insists on using Bash even though I've told it for quite a long time now that FreeBSD does not natively use or install Bash. For which it humbly apologizes, but it will continue to do so on the next question.
That's how I describe AI code agents to non-coders. If you ask it what 2 + 2 is, it will say "5". And if you tell it that's wrong and ask it to do it again, it will say, "You're right! My last answer was incorrect. The correct answer is: 2 + 2 is 5."
The Claude and Obsidian combo is great. You can offload all the hard parts of managing the workflow to the robots. I've taken to analyzing my daily notes—a stream-of-consciousness mind dump—for new Zettel notes, projects, ideas, and so on. Gemini does just fine, too, though.
No mention/comparison to Gemini CLI? Gemini CLI is awesome and they just added a kind of stealth feature for Chrome automation. This capability was first announced as Project Mariner, and teased for eventual rollout in Chrome, but it's available right now for free in Gemini CLI.
In my experience of trying to do things with gemini cli and claude code, claude code was always significantly smarter. gemini cli makes so many mistakes and then tries hard to fix them (in my applications at least).
Tbf I haven't played much with it, but I have generally found that I don't like the permission model on Gemini CLI or Codex anywhere near as much as Claude Code.
>The filesystem is a great tool to get around the lack of memory and state in LLMs and should be used more often.
This feels a bit like rediscovering stateless programming. Obviously the filesystem contents can actually change, but the idea of an idempotent result when running the same AI with the same command(s) and getting the same result would be lovely. Even better if the answer is right.
If it were consistent one way or the other, it would be great: consistently wrong, correct it; consistently right, reward it. It's the unpredictability and inconsistency that's the problem.
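One way to buy that consistency is the filesystem itself: key each prompt by a content hash and replay the stored answer on repeat runs, so the same command with the same input always yields the same result. A sketch, with `ask_model` as a stand-in for a real LLM call:

```python
import hashlib
import json
from pathlib import Path

CACHE = Path("/tmp/llm-cache")

def cached_call(prompt: str, ask_model) -> str:
    """Return a cached answer for `prompt` if one exists; otherwise call the
    model once and persist the result, making repeat runs deterministic."""
    CACHE.mkdir(parents=True, exist_ok=True)
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    entry = CACHE / f"{key}.json"
    if entry.exists():
        return json.loads(entry.read_text())["answer"]
    answer = ask_model(prompt)
    entry.write_text(json.dumps({"prompt": prompt, "answer": answer}))
    return answer
```

It doesn't make the first answer right, but it makes wrongness reproducible — which, as the comment notes, is the correctable kind.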
Yeah, absolutely. Being so close to the filesystem makes Claude Code the closest I've come to an agent that can actually get things done. Really, all the years of UIs we've created for each other just get in the way of these systems, and on a broader scale it will probably be more important than ever to have a reasonable API in your apps.
The author may like https://omnara.com/ (no affiliation) instead of SSH-ing from their phone. I have a similar setup with Obsidian and a permanently-on headless Claude Code for my PKM that I can access through the phone app.
Unix tools let agents act & observe in many versatile ways. That lets them close their loops. Taking yourself out of the loop lets your agent work far more efficiently.
But anything you can do on the CLI, so can an agent. It’s the same thing as chefs preferring to work with sharp knives.
A few days ago I read an article from HumanLayer. They mentioned shipping a week's worth of collaborative work in less than a day. That was one data point on a project.
- Has anyone found Claude Code able to produce documentation for parts of the code that does not:
(a) explode in maintenance time exponentially, while still helping Claude understand and iterate without falling over/hallucinating/designing poorly?
(b) Have you used it to make code reviewers' lives easier? If so, how?
I think the key issue for me is that the time the human takes to *verify*/*maintain* plans is not much less than what it might take them to come up with a plan detailed enough that many AI models could easily implement it.
It gets pretty tiresome with the hype tweets and not being able to judge the vibe-code cruft and demoware factor.
Especially on bootstrap/setup, AIs are fantastic for cutting out massive amounts of time, which is a huge boon for our profession. But core logic? I think that's where the not-really-saving-time studies are coming from.
I'm surprised there aren't faux academic B-school productivity studies coming out to counter that (sponsored by AI funding of course) already, but then again I don't read B-school journals.
I actually wonder if the half-life decay of the critical mass of vibe code will almost perfectly coincide with the crash/vroosh of labor leaving the profession to clean it up. It might be a mini-Y2K event, without such a dramatic single day.
Yep. If you're already familiar with Unix, Claude Code doesn't even seem that amazing. The idea of composing simple things together into data pipelines is incredibly powerful. It seems every generation rediscovers this. Kleppmann wrote about it in his book. ESR has a whole book about it.
This says Claude Code but seems like it would apply to Gemini CLI (and its clones like iflow, qwen), opencode, aider etc too as well as work with any decent model out there. I haven't used claude code but these CLIs and models (deepseek, qwen, kimi, gemini, glm, even grok) are quite capable.
Well, no, they aren't, but the orchestration frameworks in which they are embedded sometimes are (though a lot of times a whole lot of that everything is actually done by separate binaries the framework is made aware of via some configuration or discovery mechanism.)
sure, but that's not what we're talking about here.
The article frames LLMs as a kind of fuzzy pipe that can automatically connect lots of tools really well. This works particularly well with unix-philosophy do-one-thing tools, so being able to access such tools is a superpower that is unique and secretly shiny about Claude Code, one that browser-based ChatGPT doesn't have.
It's more like a fluid/shapeless orchestrator that fuzzily interfaces between human and machine language, arising momentarily from a vat to take the exact form that fits the desired function, then disintegrates until called upon again.
A CLI might be the most information-theoretically efficient form of API, significantly more succinct than, e.g., JSON-based APIs. It's fitting that it would be optimal for Claude Code, given the origin of the name "Claude".
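The succinctness claim is easy to eyeball: compare the bytes in a shell invocation against a JSON tool-call encoding of the same request. The JSON shape below is a typical, made-up example, not any particular protocol's wire format:

```python
import json

cli = "grep -rn TODO src/"
as_json = json.dumps({
    "tool": "grep",
    "arguments": {
        "pattern": "TODO",
        "path": "src/",
        "recursive": True,
        "line_numbers": True,
    },
})

print(len(cli), "bytes as a CLI string")
print(len(as_json), "bytes as a JSON tool call")
```

The CLI string carries the same request in a fraction of the tokens, which matters when every byte of a tool call passes through a context window.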
> Anyone who can't find use cases for LLMs isn't trying hard enough
That's an interesting viewpoint from an AI marketing company.
I think the essential job of marketing is to help people make the connection between their problems and your solutions. Putting all on them in a kind of blamey way doesn't seem like a great approach to me.
I read the whole thing and could still not figure out what they’re trying to solve. Which I’m pretty sure goes against the Unix philosophy. The one thing should be clearly defined to be able to declare that you solve it well.
That's fair. But it's what I believe. I spend a lot of time inside giant companies and there are too many people waiting for stone tablets to come down the mountain with their use cases instead of just playing with this stuff.
You're supposed to start with a use case that is unmet, and research/build technology to enable and solve the use case.
AI companies are instead starting with a specific technology, and then desperately searching for use cases that might somehow motivate people to use that technology. Now these guys are further arguing that it should be the user's problem to find use cases for the technology they seem utterly convinced needs to be developed.
I disagree actually. Saying things like “everyone else managed to figure it out” is a way of creating FOMO. It might not be the way you want to do it, marketing doesn’t have to be nice (or even right) to work.
Is "finding a way to remove them, with prejudice, from my phone" a valid use case for them? I'm tired of Gemini randomly starting up.
(Well, I recently found there is a reason for it: I'm left-handed, and unlocking my phone with my left hand sometimes touches the icon stupidly put by default on the lock screen. Not that it would work: my phone is usually running with data disabled.)
kristopolous|5 months ago
https://github.com/day50-dev/Mansnip
Wrapping this in a stdio MCP server is probably a smart move.
I should just API-ify the code and include the server in the pip package. How hard could this possibly be...
array_key_first|5 months ago
Really, GUIs can be formed of a public API with graphics slapped on top. They usually aren't, but they can be.
p_ing|5 months ago
Yet highly preferred over CLI applications to the common end user.
CLI-only would have stunted the growth of computing.
Kim_Bruning|5 months ago
Now, due to tools like Claude Code, the CLI is actually clearly the superior interface.
(At least for now)
It's not supposed to be an us vs them flamewar, of course. But it's fun to see a reversal like this from time to time!
all2|5 months ago
I'm curious to see what you've come up with. My local LLM experience has been... sub-par in most cases.
eadmund|5 months ago
If only I were retired and had infinite time!
[1]: https://www.alephic.com/no-saas
blibble|5 months ago
exact opposite of the unix philosophy
BenoitEssiambre|5 months ago
Information theoretic efficiency seems to be a theme of UNIX architecture: https://benoitessiambre.com/integration.html.