wolframhempel | 11 months ago
Over the centuries we’ve lost and gained a lot of standalone skills. Most people throughout history would scoff at my poor horse-riding and sword fighting, or my inability to navigate by the stars.
My logic, reasoning and oratory abilities on the other hand, as well as my understanding of fundamental mechanics and engineering principles would probably hold up quite well (language barrier notwithstanding) back in ancient Greece or in 18th century France.
I believe AI is fine to use for standalone skills in programming. Having it write isolated bits of logic, e.g. a getRandomHexColor() function in JavaScript or a query in an SQL dialect you’re not deeply familiar with, is a great help and time-saver.
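The getRandomHexColor() mentioned above is exactly the kind of isolated, self-checking helper this describes; a minimal sketch (one possible implementation, not anyone's actual code) might look like:

```javascript
// Returns a random CSS hex color string, e.g. "#a3f01c".
// A minimal sketch of the kind of standalone helper described above.
function getRandomHexColor() {
  const n = Math.floor(Math.random() * 0x1000000); // 0 .. 0xffffff
  return "#" + n.toString(16).padStart(6, "0");    // zero-pad to 6 hex digits
}

console.log(getRandomHexColor());
```

The point stands either way: a function like this is trivially reviewable in isolation, which is what makes it safe to delegate.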
On the other hand, handing over the fundamental architecture of your project to an AI will erode your foundational problem solving and software design abilities.
Fortunately, AI is quite good at the former, but still far from being able to do the latter. So, to me at least, AI-based code editors are helpful without the risk of long-term skill degradation.
globular-toast|11 months ago
Too many people think what I do is "write code". That is incorrect. What I do is listen, read, watch and think. If code needs writing then it already basically writes itself because at that point I've already done the thinking. The typing part is an inconvenience that I'd happily give up if I could get my thoughts into the computer directly somehow.
AI tools make the easy stuff easier. They don't help much with the hard stuff. The most useful thing I've found them for is getting an initial orientation in a completely unfamiliar area. But after that, when I need hard details, it's books, manuals, blogs, etc., just like before. I find juniors are already lacking in their ability to find and assimilate knowledge, and I feel like having AI isn't going to help here.
namaria|11 months ago
This is why I don't see LLM assisted coding as revolutionary. At best I think it's a marginal improvement on indexing, search and code completion as they have existed for at least a decade now.
NLP is a poor medium for specifying abstract symbolic systems. And LLMs work by finding patterns in latent space, I think. But the latent space doesn't represent reality, it represents language as recorded in the training data. It's easy to underestimate just how much training data were used for the current state-of-the-art foundational models. And it's easy to overestimate the ability these tools have to weave language and by induction attribute reasoning abilities to them.
The intuition I have about these LLM-driven tools is that we're adding degrees of freedom to the levers we use. When you're near an attractor congruent with your goals, it feels like magic. But I think this is overfitting: the things we do now are closely mirrored by the data used to train these models. As we move forward in terms of tooling, domains, technology, culture, etc., the available data will become increasingly obsolete and relevant data increasingly scarce.
Besides, there's the problem of unknown unknowns: lots of people using these tools assume that the attractors they see pulling on their outcome are adequate, because they can only see some arbitrary surface of them. And since they don't know what geometries lie beneath, they end up creating and exposing systems with several unknown issues that might have implications for security, legality, morality, etc. And since there's a time delay between their feeling of accomplishment and the surfacing of those issues, and they will likely keep using the same approach, we might be heading for one hell of a bullwhip effect across dimensions we can't anticipate at all.
Arisaka1|11 months ago
I understand what you mean, but for some reason I cannot imagine my younger self, getting into his first programming practice, going "ugh, why do I have to type this? Why can't I just think and let the computer do it for me?" I don't think I would've reached where I am if I had seen the act of practice as a tedium I wished to have removed.
You probably see it like that because you're not that kid anymore, and for today's "you" code is just a means to provide and nothing more.
flowerthoughts|11 months ago
Even your engineering principles are probably superior to the ancient Greeks', since you can simulate bridges before laying the first stone. "It worked the last time" is still a viable strategy, but the models we have today mean we can often say "it will work the first time we try."
My point being that theory (and thus what is considered foundational) has progressed as well.
politelemon|11 months ago
Some better, more suitable examples would be warranted here; none of these were as widespread or common as you'd assume, so little to no metaphorical scoffing would happen over them. Sewing and darning, and subsistence skills, on the other hand, while mundane, are uncommon for many of us.
LiKao|11 months ago
It's like NP: finding a solution to an NP problem can be very hard, but verifying that a proposed solution is correct is easy.
You might not know the statements required, but once the AI reminds you which statements are available, you can check that the logic built from those statements makes sense.
Yes, there is a pitfall of being lazy and forgetting to verify the output. That's where a lot of vibe coding problems come from in my opinion.
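The solve/verify asymmetry above can be made concrete with subset sum, a classic NP-complete problem (my choice of illustration, not from the thread): finding a subset that hits a target may require searching exponentially many candidates, but checking a proposed subset is just a sum and a comparison.

```javascript
// Subset sum: does some subset of `nums` add up to `target`?
// Finding one is hard in general: this brute force tries all 2^n subsets.
function findSubset(nums, target) {
  const n = nums.length;
  for (let mask = 0; mask < (1 << n); mask++) {
    const subset = nums.filter((_, i) => mask & (1 << i));
    if (subset.reduce((a, b) => a + b, 0) === target) return subset;
  }
  return null;
}

// ...but verifying a candidate is trivial: check membership and sum it.
// (Membership check via includes() is a sketch; it ignores duplicates.)
function verifySubset(nums, subset, target) {
  return subset.every(x => nums.includes(x)) &&
         subset.reduce((a, b) => a + b, 0) === target;
}

const nums = [3, 34, 4, 12, 5, 2];
const solution = findSubset(nums, 9);
console.log(solution, verifySubset(nums, solution, 9)); // [ 4, 5 ] true
```

The same asymmetry is what makes AI-generated code workable at all: reviewing a proposed solution is cheap relative to producing it, provided you actually do the review.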