One thing that is rarely mentioned, but usually practiced: Unix tools rarely modify their own state. The only ways to modify a program's behaviour are to pass parameters, set an environment variable, or edit a configuration file. This lends a great degree of predictability to how programs behave. If you pipe something into grep, sed, awk, etc., you know how it will behave. It sounds like the article's author is not just ignoring that aspect of the Unix philosophy but contradicting it.
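To make that predictability concrete, here is a minimal sketch: grep's output depends only on its input and its explicit flags, so the same pipe behaves the same way every time.

```shell
# Same input, same behaviour, every run; the only way to change
# the result is an explicit flag.
printf 'foo\nbar\n' | grep foo       # prints: foo
printf 'foo\nbar\n' | grep -v foo    # the -v flag inverts the match: bar
```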
I'm not exactly sure what the author is arguing for. Perhaps they have a vision, and perhaps it is a vision that has utility. That said, I do not see how their words fit into a modernized version of the Unix philosophy.
I'm a big fan of the Unix philosophy, but this article is not resonating with me. It reads like good engineering (Unix) mixed with metaphysics bordering on some sort of new-age spiritualism.
If anything, I see the dawn of LLMs as upending the “internet as we knew it” and as having a step-change effect on the value of human literacy.
It's a terrible thing to say, but the Unix philosophy leaves much to be desired. Take the classic example:
cat file | grep "foo" | awk '{print $2}'
That's terrible in this day and age. Text streams are a poor fit for modern work. We've long moved past /etc/passwd-like formats, and even with those, the tooling has been extremely subpar. What happens if you put a colon in a field of a colon-separated file?
There's no end to the nonsense you have to deal with that just shouldn't even be a problem in 2025.
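To illustrate the colon problem above, a minimal sketch: a record whose second field itself contains a colon silently shifts every field after it, so the "third field" is no longer the one you wanted.

```shell
# A well-behaved record: the third colon-separated field is the number.
printf 'alice:Alice Example:1000\n' | awk -F: '{print $3}'   # prints: 1000

# A colon inside the second field shifts everything: $3 is now the
# tail of the name, and the number we wanted has moved to $4.
printf 'bob:Bob: the Builder:1001\n' | awk -F: '{print $3}'  # prints:  the Builder
```

Nothing in the stream marks the stray colon as data rather than delimiter, which is exactly the class of nonsense being complained about.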
Reading the comments lets me re-live my early years as an engineer at a hardcore Unix kernel company. For every line of code, there were more nays than yeas, but we still had to finish the code and push forward. Next time we shall bring beer and chips.
Just using prompts like Unix commands in pipelines takes you a long way, at least conceptually. You can have the AI choose the best data format from stage to stage, although all formats can be represented as text. I'm not sure there's much difference between another pipeline stage and another paragraph in the prompt. Maybe have the AI decide when a particular prompt is nearing the token limit of its model and split the task into multiple stages. The AI can also design the pipeline, deciding what each stage should do.
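The prompts-as-pipeline-stages idea above can be sketched in shell. This is purely hypothetical: each function stands in for some LLM invocation (there is no real `llm` command here), with awk and tr faking the model so the sketch actually runs; text is the interchange format between stages, exactly as in a Unix pipe.

```shell
# Hypothetical stand-in for: llm "extract the second column"
extract_second_column() { awk '{print $2}'; }

# Hypothetical stand-in for: llm "uppercase the text"
uppercase() { tr 'a-z' 'A-Z'; }

# Stages compose like ordinary commands; each consumes the
# previous stage's text output.
printf 'a foo\nb bar\n' | extract_second_column | uppercase
# prints:
# FOO
# BAR
```

Whether two stages or one bigger prompt is better is then just the usual pipeline-granularity trade-off.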
chubot | 9 months ago:
If you want to make an "AI dev philosophy", sure, go ahead. But it has nothing to do with the Unix philosophy.