physicles | 1 month ago
I partially agree. While LLMs don't magically increase a human's mental capacity, they do allow a given human to explore the search space of, e.g., abstractions faster than they otherwise could before running out of time or patience.
But (to use GGP's metaphor) do LLMs increase the ultimate height of the software mountain at which complexity grinds everything to a halt?
To be more precise, this is the point at which the cost of changing the system gets prohibitively high because any change you make will likely break something else. Progress becomes impossible.
Do current LLMs help us here? No, they don't. It's widely known that if you vibe code something, you'll pretty quickly hit a wall where any change you ask the LLM to make will break something else. To reliably make changes to a complex system, a human still needs to really grok what's going on.
Since the complexity ceiling is a function of human mental capacity, there are two ways to raise that ceiling:
1. Reduce cognitive load by building high-leverage abstractions and tools (e.g. compilers, SQL, HTTP)
2. Find a smarter person/machine to do the work (i.e. some future form of AI)
So while current LLMs might help us do #1 faster, they don't fundamentally alter the complexity landscape, at least not yet.
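To make #1 concrete, here's a toy sketch (my own illustration, the table and data are made up): the cognitive-load win of an abstraction like SQL is that one declarative line replaces hand-written iteration and bookkeeping that you would otherwise have to hold in your head.

    import sqlite3

    # Set up a throwaway in-memory table for the example.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [("alice", 30.0), ("bob", 10.0), ("alice", 5.0), ("bob", 20.0)],
    )

    # 1. High-leverage abstraction: declare *what* you want and let the
    # engine handle grouping and accumulation.
    sql_totals = dict(
        conn.execute("SELECT customer, SUM(amount) FROM orders GROUP BY customer")
    )

    # 2. The same result by hand: spell out *how* to compute it, with the
    # reader tracking the loop state themselves.
    manual_totals = {}
    for customer, amount in conn.execute("SELECT customer, amount FROM orders"):
        manual_totals[customer] = manual_totals.get(customer, 0.0) + amount

    assert sql_totals == manual_totals
    print(sql_totals)  # {'alice': 35.0, 'bob': 30.0}

Both versions produce the same answer; the difference is how much of the mechanism the human has to keep in working memory, which is exactly the kind of ceiling-raising that #1 describes.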
dcre | 1 month ago
https://bsky.app/profile/sunshowers.io/post/3mbcinl4eqc2q
https://bsky.app/profile/sunshowers.io/post/3mbftmohzdc2q
https://bsky.app/profile/sunshowers.io/post/3mbflladlss26