Lyngbakr | 2 months ago
For me, part of creating "perfect" software is that I am very much the one crafting the software. I'm learning while creating, but I find such learning is greatly diminished when I outsource building to AI. It's certainly harder and perhaps my software is worse, but for me the sense of achievement is also much greater.
anon7000 | 2 months ago
The author is saying that “perfect software” is like a perfect cup of coffee. It’s highly subjective to the end user. The perfect software for me perfectly matches how I want to interact with software. It has options just for me. It’s fine tuned to my taste and my workflows, showing me information I want to see. You might never find a tool that’s perfect for you because someone else wrote it for their own taste.
LLMs come in because they wildly increase the amount of stuff you can play around with on a personal level. It means someone finally has time to put together the perfect workflow and advanced tools. I personally have about zero time outside of work that I can invest in that, so I totally buy the idea that LLMs can really give people the space to develop personal tools and workflows that work perfectly for them. The barrier to entry and experimentation is incredibly low, and since it’s just for you, you don’t need to worry about scale and operations and all the hard stuff.
There is still plenty of room for someone to do it by hand, but I certainly don’t have time to do that. So I’ll never find perfect software for some of my workflows unless I get an assist from LLMs.
I agree with you about learning and achievement and fun — but that’s completely unrelated to the topic!
ggauravr | 2 months ago
You hit on the key constraint: time. The point isn't that using LLMs specifically provides agency, but that it lowers the barrier, letting us build the things that do. "Perfect software" is perfect not just because of what it does, but because of what it lacks (fluff, tracking, features we don't need).
analogpixel | 2 months ago
A lot of the time, the LLM outputs the code, I test my idea, and realize I really don't care or the idea wasn't that great, and now I can move on to something else.
conartist6 | 2 months ago
Why is this (I don't know a better way to say it) good?
So, OK, you don't get into the weeds and you're proud of that, but also nothing you can think of wanting to do turns out to be worth doing.
Those things are wholly related. Opportunity never comes exactly at the time and in the way you expect. You have to be open to it; you have to be seeking out new experiences and new ideas. You have to get into the weeds and try things without being entirely sure what the outcome might be, what insight you might gain, or when that insight might become useful.
6510 | 2 months ago
Github is full of half forgotten saved games waiting for money to be thrown at them.
dotancohen | 2 months ago
I did vibe code the first version. It runs, but it is utterly unmaintainable. I'm now rewriting it using the LLM as if it were a junior or outsourced programmer (not a developer, that remains my job) and I go over every line of application code. I love it, I'm pushing out decent quality code and very focused git commits. I write every commit message myself, no LLM there. But I don't even bother checking the LLM's unit and integration tests.
I would have never gotten to this stage of my dream project without AI tooling.
ggauravr | 2 months ago
However, I don’t think using LLMs has to be an all-or-nothing proposition. You can still choose to build the parts you most care about yourself (where the learning happens) and delegate the other aspects to AI.
In the case of the text justifier, it was a small nuisance I wanted solved with very little effort. I didn't care about the browser APIs, just the visual outcome, so I let the LLM do it all.
If I were building something more complex, I would use LLMs much more mindfully. The value is in having the choice to delegate the chores so you can focus on the craft where it matters to you.
While we might value the process differently, the broader point remains that these tools enable people to build things they otherwise wouldn't have the time or specific resources to create, and still feel a sense of agency and ownership.
bitwize | 2 months ago
I remember some of the early phases of home computing. The whole point of owning a home computer was that in addition to using other people's software, you could write your own and put the machine to whatever use you could think of. And it was a machine you owned, not time on some big company's machine which, ultimately, was controlled, and uses approved, by that company. The whole point of the home computing market was to create an environment where people managed the machines, not the other way around. (Wozniak has said that this was one of his motivations for creating the Apple I and II.)
Now we have people like this guy who say we finally have autonomy in computing—by purchasing time on some big company's machine doing numberwang to write the software for you. Ultimately the big company, not you, controls the machine and the uses to which it may be put. What's worse is these companies are buying up all the manufacturing capacity, starving the consumer market and making it more difficult to acquire computing hardware! No, this is not the autonomy envisioned by Wozniak, Jobs, or even a young shithead Bill Gates.
lioeters | 2 months ago
Large language models are not "free", given the resources and the exploitative means it took to create them; they carry serious social costs and a loss of personal freedom. I still use them, particularly local models, but even that is questionable. At least when the AI bubble bursts and the inevitable enshittification begins, I will be able to continue running them without further vendor lock-in or erosion of privacy.
In terms of bootstrappability and supply chain risk, LLMs fail because we the people are not able to re-create them from scratch.
PaulRobinson | 2 months ago
The first time I saw a computer, I saw a machine for making things. I once read a quote from Noël Coward, who said that television was "for appearing on, not watching", and I immediately connected it to my own relationship with computers.
I don't want an LLM to write software or blog posts for me, for the same reason I don't want to hire an intern to do that for me: I enjoy the process.
Everything else, I'm in agreement on. Writing software for yourself - and only for yourself - is a wonderful superpower. You can define the ergonomics for yourself. Lots of the things that make writing software a little painful go away when you're the only customer: UX learning curves flatten, security concerns diminish a little, subscription costs evaporate...
I actually consider the ability to write software for yourself a more profound and important right than anything the open source movement offers. Of course, I want an environment that makes doing so easier, and that's what makes me so concerned about closed ecosystems.
PunchyHamster | 2 months ago
That being said, calling it "perfect" is a stretch, at least for my own: it does a thing, it does it well enough, and that's all. It could be better, but it won't be, because it's not worth it; it's good enough.