top | item 46946000


planb | 20 days ago

This article is about people using abstractions without knowing how they work. This is fine. This is how progress is made.

But someone designed the abstraction (e.g. the Wifi driver, the processor, the transistor), and they made sure it works and provides an interface to the layers above.

Now you could say a piece of software completely written by a coding agent is just another abstraction, but the article does not really make that point, so I don't see what message it tries to convey. "I don't understand my wifi driver, so I don't need to understand my code" does not sound like a valid argument.


mamcx|20 days ago

> This article is about people using abstractions without knowing how they work. This is fine. This is how progress is made.

The big problem is that there is now an actual risk that most will never be able to MAKE abstractions. Sure, let's stand on the shoulders of giants, but before AI most did some extra work and flexed their brains.

Everyone makes abstractions, and hiding the "accidental complexity" of my current task is good, but I should still deal with the "necessary complexity" to say I have actually done a job.

If it's only being a dumb pipe...

1718627440|19 days ago

> Now you could say a piece of software completely written by a coding agent is just another abstraction,

Abstractions come with both syntactic and semantic behaviour specifications; in other words, their implementation can have bugs. An LLM never has a bug: it always produces "something", and whether that is what you wanted is on you to verify.

ragall|20 days ago

> Now you could say a piece of software completely written by a coding agent is just another abstraction

You're almost there. The current code-generating LLMs will be a dead end because it takes more time to thoroughly review a piece of code than to generate it, especially because LLM code is needlessly verbose.

The solution is to abandon general-purpose languages and start encapsulating the abstraction behind a DSL, which is orders of magnitude more restricted and thus simpler than a general-purpose language, making it much more amenable to being controlled by an LLM. SaaS companies should go from API-first to DSL-first, and in many cases to more than one DSL: e.g. a blog-hosting company would have one DSL for page layouts, one for controlling edits and publishing, one for asset-manipulation pipelines, one for controlling the CDN, etc. It's a sort of IaC: you define a desired outcome, and the engine behind it takes care of actuating it.
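To make the idea concrete, here is a minimal sketch of what such a DSL-first engine could look like, using the blog-layout example. Everything here is hypothetical (the section vocabulary, the dict shape, the function names are invented for illustration, not any real product's API): the point is that the DSL has a tiny closed vocabulary the engine can validate, and the engine diffs desired state against current state IaC-style.

```python
# Hypothetical "layout DSL": a declarative description with a closed,
# validatable vocabulary instead of a general-purpose language.
ALLOWED_SECTIONS = {"header", "posts", "sidebar", "footer"}

def validate(layout):
    """Reject anything outside the DSL's tiny vocabulary."""
    unknown = [s["type"] for s in layout["sections"]
               if s["type"] not in ALLOWED_SECTIONS]
    if unknown:
        raise ValueError(f"unknown section types: {unknown}")
    return layout

def plan(current, desired):
    """Diff current vs desired state into actuation steps (the engine's job)."""
    cur = {s["type"] for s in current["sections"]}
    new = {s["type"] for s in desired["sections"]}
    return (sorted(f"add {t}" for t in new - cur)
            + sorted(f"remove {t}" for t in cur - new))

current = {"sections": [{"type": "header"}, {"type": "posts"}]}
desired = validate({"sections": [{"type": "header"}, {"type": "posts"},
                                 {"type": "sidebar"}]})
print(plan(current, desired))  # ['add sidebar']
```

An LLM driving this interface can only emit section types the validator accepts, which is the "much more restricted, thus more controllable" property the comment is arguing for.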

order-matters|20 days ago

I agree. Additionally, a company can own and update a business language of its own design, at its own pace and need. It can then use AI to translate from its controlled business language to the DSL needed (translation being an area AI actually does well). This way the LLM would only ever be going from general to specific, which should keep it on the rails, and the business can keep its business logic stored in a language it controls.

Now, that said, there is still the actual engineering problem of leveraging the capabilities of the underlying technology. For example, being able to map your 4-core program onto a 16-core system and have it work is one thing; actually utilizing 16 cores is another. The same extends to all technological advancements.
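The 4-core vs 16-core gap above can be sketched with a back-of-the-envelope scheduling model (a deliberate simplification: equal-sized tasks, no overhead, no serial fraction): with T equal tasks on W workers the wall-clock time is ceil(T / W) task-lengths, so speedup is capped by the task count, not just the core count.

```python
import math

def speedup(tasks, workers):
    """Idealized speedup for `tasks` equal-sized tasks on `workers` cores."""
    serial = tasks                         # time on one worker
    parallel = math.ceil(tasks / workers)  # waves of parallel execution
    return serial / parallel

print(speedup(4, 16))   # 4.0  -> a 4-task program can't use more than 4 cores
print(speedup(16, 16))  # 16.0 -> restructured into 16 tasks, all cores engage
```

"Having it work" on the bigger machine changes nothing in the first line; getting the second line requires re-decomposing the work, which is the engineering problem the comment points at.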

ebhn|20 days ago

I like this direction, but I worry about developers' involvement in the design of the DSL becoming the new bottleneck, with the same problems. The code that becomes the guardrails cannot just be generated slop; it should be thoroughly designed and understood, imo.

Quothling|20 days ago

To be fair to AI, it's not like Clean Code and its OOP cult weren't already causing L1-L3 cache misses with every abstraction and the way they spread functions out over multiple files. I'm not sure AI can really make it worse than that, and it's been the gold standard in a lot of places for 25 years. For the most part it doesn't matter; in most software it'll cost you a little extra compute, but rarely noticeably so. If you're writing software for something important, though, like one of those abstractions you talk about, then it's going to travel through everything, making it even more important to actually know what you're building upon.

Still, I'm not convinced AI is necessarily worse at reading the documentation and using the abstractions correctly than the programmers using the AI are. If you don't know what you're doing, does it matter whether you use an AI instead of Google programming?