top | item 43097651

yoz | 1 year ago

Disagree. Some abstractions are still vital, and it's for the same reasons as always: communicate purpose and complexity concisely rather than hiding it.

The best code is that which explains itself most efficiently and readably to Whoever Reads It Next. That's even more important with LLMs than with humans, because the LLMs probably have far less context than the humans do.

Developers often fall back on standard abstraction patterns that don't have good semantic fit with the real intent. Right now, LLMs are mostly copying those bad habits. But there's so much potential here for future AI to be great at creating and using the right abstractions as part of software that explains itself.
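To make that concrete, here's a minimal sketch (all names invented for illustration): the same rule written as a generic catch-all pattern, and again as an abstraction that names the actual intent.

```python
# Poor semantic fit: a catch-all "strategy" class that hides what the
# code is actually for. The reader must reverse-engineer the intent.
class DiscountStrategy:
    def apply(self, amount):
        return amount * 0.9 if amount > 100 else amount

# Better fit: the abstraction states the business rule directly, and the
# magic numbers become named, tunable parameters.
def bulk_order_discount(order_total, threshold=100, rate=0.10):
    """Orders over the threshold get a percentage discount."""
    if order_total > threshold:
        return order_total * (1 - rate)
    return order_total

# Both compute the same thing; only one explains itself to
# Whoever Reads It Next.
assert DiscountStrategy().apply(200) == bulk_order_discount(200) == 180.0
```

A reader (human or LLM) landing on the second version needs no extra context to know what it does or why.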

balls187 | 1 year ago

I’ve thought about your comment, and I think we’re both right.

Fundamentally, computers are a series of high and low voltages, and everything above that is a combination of abstraction and interpretations.
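That layering can be sketched in a few lines: treat 1/0 as stand-ins for high and low voltages, build logic gates on top of them, and addition on top of the gates. Everything past the first function is abstraction and interpretation.

```python
# Level 0: "voltages" modeled as the bits 0 and 1.
# Level 1: a universal gate built directly on them.
def NAND(a, b):
    return 0 if (a and b) else 1

# Level 2: other gates built only from NAND.
def XOR(a, b):
    t = NAND(a, b)
    return NAND(NAND(a, t), NAND(b, t))

def AND(a, b):
    return NAND(NAND(a, b), NAND(a, b))

# Level 3: arithmetic built from gates.
def half_adder(a, b):
    """Add two bits: returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

# "1 + 1 = 2" is our interpretation of the pair (sum=0, carry=1).
assert half_adder(1, 1) == (0, 1)
```

Nothing below the top level "knows" about numbers; addition exists only in how we interpret the output bits.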

Fundamentally, there will always be some level of this; it's not like an A(G)I will interface directly using electrical signals (though in some distant future it could).

However, from what I've seen, this current phase of AI (LLMs + generators + tools) is showing that computers do not need to solve problems the same way humans do, because computers face different constraints.

So the abstractions that programmers use to manage complexity won't be necessary (at some future point).