I'm a co-founder of Calcapp, an app builder for formula-driven apps, and I recently received an email from a customer ending their subscription. They said they appreciated being able to kick the tires with Calcapp, but had now fully moved to an AI-based platform. So we're seeing this reality play out in real time.
The next generation of Calcapp probably won't ship with a built-in LLM agent. Instead, it will expose all functionality via MCP (or whatever protocol replaces it in a few years). My bet is that users will bring their own agents -- agents that already have visibility into all their services and apps.
I hope Calcapp has a bright future. At the same time, we're hedging by turning its formula engine into a developer-focused library and SaaS. I'm now working full-time on this new product and will do a Show HN once we're further along. It's been refreshing to work on something different after many years on an end-user-focused product.
I do think there will still be a place for no-code and low-code tools. As others have noted, guardrails aren't necessarily a bad thing -- they can constrain LLMs in useful ways. I also suspect many "citizen developers" won't be comfortable with LLMs generating code they don't understand. With no-code and low-code, you can usually see and reason about everything the system is doing, and tweak it yourself. At least for now, that's a real advantage.
Sorry to hear about the customer churn, but the MCP-first strategy makes sense to me and seems like it could be really powerful. I also suspect that the bring your own agent future will be really exciting, and I've been surprised we haven't seen more of it play out already.
Agree there will be a place for no-code and low-code interfaces, but I do think it's an open question where the value will be captured: by SaaS vendors, or by the LLM providers themselves.
I saw a Second Brain demo [0] built with no-code, using AI inside the widgets to do the work. It took a little handholding, but it turned out to be very flexible and showed me a new way to look at the whole industry.
I highly suggest you expose functionality through GraphQL. It lets users send out an agent with a goal like "figure out how to do X," and because GraphQL has introspection, it can find things pretty reliably! It's really lovely as an end user. Best of luck!
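To make the discovery loop concrete: an agent can ask a GraphQL endpoint to describe itself before pursuing its goal. A minimal sketch in Python, where the field names and the mocked response are hypothetical, not from any real API:

```python
import json

# An abbreviated form of the standard GraphQL introspection query: it asks
# the server to describe every root query field it exposes, so an agent can
# discover capabilities at runtime instead of relying on hand-written docs.
INTROSPECTION_QUERY = """
{
  __schema {
    queryType {
      fields { name description }
    }
  }
}
"""

def list_query_fields(response_json: str) -> list[str]:
    """Extract the names of the root query fields from an introspection response."""
    data = json.loads(response_json)
    fields = data["data"]["__schema"]["queryType"]["fields"]
    return [f["name"] for f in fields]

# A mocked server response, standing in for a real endpoint:
sample = json.dumps({
    "data": {"__schema": {"queryType": {"fields": [
        {"name": "apps", "description": "List the user's apps"},
        {"name": "formulas", "description": "Look up a formula"},
    ]}}}
})

print(list_query_fields(sample))  # ['apps', 'formulas']
```

In practice the agent would POST `INTROSPECTION_QUERY` to the endpoint, then match the discovered fields against its goal.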
I think this view is really short-sighted. Low-code tools date back to the '80s, and the more likely outcome here is that low-code and agentic tools simply merge.
There's a lot of value in having direct manipulation and visual introspection of UIs, data, and logic. Those things allow less technical people to understand what the agents are creating, and ask for help with more specific areas.
The difficulty in the past has been 1) the amount of work it takes to build good direct manipulation tools - the level of detail you need to get to is overwhelming for most teams attempting it - but LLMs themselves make this a lot easier to build, and 2) what to do when users hit the inevitable gaps in your visual system. Now LLMs fill these gaps pretty spectacularly.
This makes the most sense to me too. My feeling is so-called AI is going to deliver on a lot of the things we're used to having shoddy versions of -- good natural language interfaces, good WYSIWYG type tools, all of this could turn the wix/squarespace/wordpress/etc landscape into something pretty good, rather than just OK.
In my most hopeful of futures, we've figured out how to do lightweight inference, and if the models don't run locally at least they aren't harming the planet, and all this AI tooling hydrates all the automation projects of the last 40 years so that my favorite tiny local music label can have a super custom online shop that works exactly the way they need without having to sacrifice significant income to do it.
I agree. I think that once your LLM hits a baseline level of computer science / programming "understanding," it can pretty easily work with whatever language. Using narrow DSLs and low-code platforms could be a great way to constrain an LLM and keep it on the happy path.
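As a toy illustration of the "happy path" idea (a sketch with invented names, not any particular platform's DSL): parse the LLM's output and evaluate only a whitelisted set of constructs, so anything outside the DSL is rejected by construction.

```python
import ast
import operator

# Toy "happy path" DSL: LLM output may only contain numeric literals,
# + - * /, unary minus, and parentheses. Names, function calls, attribute
# access, etc. are rejected outright, constraining the model by construction.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_formula(src: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError(f"outside the DSL: {type(node).__name__}")
    return walk(ast.parse(src, mode="eval"))

print(eval_formula("2 * (3 + 4)"))  # 14
try:
    eval_formula("__import__('os').system('...')")
except ValueError as e:
    print("rejected:", e)
```

The same shape scales up: the narrower the grammar, the less room the model has to wander off the rails.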
> There's a lot of value in having direct manipulation and visual introspection of UIs, data, and logic
Yes. In the GIS industry, for example, nothing has fundamentally changed with the introduction of LLMs. They may make the same processes more efficient, e.g. through automated building of workflows. AI has significantly improved classification work, of course, but it's still using the same principles (we've been doing ML longer than most industries). Geocoding will get cheaper and easier, but it's still geocoding.
GIS software for standard visualisation, export, and map production will get a lot better because of LLMs. It's an area where the sheer complexity and number of formats was overwhelming, but now a GeoTIFF parser can be built in a day or two.
The article was making a bit of a sweeping statement based on a single datapoint: they didn't need Retool anymore.
Anything no-code or low code has a data model, and an agent can manipulate it in ways that are compatible with the system design. Letting an agent loose on a problem, without a good pilot, just leads to poor designs.
Agreed, the low-code dream has been around a long time. Douglas Coupland, the novelist who named Generation X, wrote a book called Microserfs in 1995 about a startup building a low-code platform.
> There's a lot of value in having direct manipulation and visual introspection of UIs, data, and logic. Those things allow less technical people to understand what the agents are creating, and ask for help with more specific areas.
A lot of value indeed, but not just for less technical people. Imagine ddd vs gdb. Usually some kind of visual debugging aid isn’t available in an environment because the ROI isn’t there, not because technical people love mental parsing or hate graphics. The LLM revolution is changing the calculus here: creating new tools and new visualizations is easier than ever. It would be unthinkable three years ago to create a visual debugging aid just to use it once, outside of truly gnarly and show-stopping bugs; now it could very well be feasible.
Does anyone actually believe this is the case? I use LLMs to ‘write’ code every day, but it’s not the case for me; my job is just as difficult and other duties expand to fill the space left by Claude. Am I just bad at using the tools? Or stupid? Probably both but c’est la vie.
I feel I easily spend 3-5 times more on "QA" with LLM vibe coding than when doing it myself. The only difference: I couldn't code what I'm currently making without an LLM; the breadth of knowledge required is just too vast.
It would probably have been more accurate to say "the cost of writing code" -- and you're totally right about the rise of other duties (and technologies) that expand to fill that gap.
As a dev team, we've been exploring how we grapple with the cultural and workflow changes that arise as these tools improve--it's definitely an ongoing and constantly evolving conversation.
Same here. I use Claude Code every day; it's very useful, but nowhere near the point where I don't have to jump in and fix very simple stuff. I actually have a bug in an app that I don't fix because I use it as a test for LLMs, and so far not one has solved it. It's a CSS bug!
I think the answer is that by the time AI can replace every function you do, it's also replaced everyone else and the world will either already have or will need to change radically.
I personally hope that the future becomes a UBI consumer-as-a-job thing, minus too much of the destructive impact that current consumerism has on the world.
Of course it's the case. However, "shipping code" isn't valuable and never has been. Shipping the right code that actually works and actually solves a problem is what is valuable.
It's those who are shipping easily who are stupid. What I mean is that you can just ask the LLM to use the browser to get API keys and then use them to deploy. That's how the cost of shipping is zero: a hefty amount of YOLO code on top of a YOLO deploy. I mean, you could also have the LLM build you a CI/CD pipeline, but that's not YOLO.
This conclusion is completely off the mark. The author seems to lack a critical piece of understanding of software development and operations. The case against no-code might make sense (the UI being a hurdle for AI use), but it does not apply to low-code.
Low-code has become especially important now with LLMs for several reasons, especially in terms of stability, maintainability, security and scalability.
If the same feature can be implemented with less code, the stability of the software improves significantly. LLMs work much better with solid abstractions; they are not great at coding the whole thing from scratch.
More code per feature costs more in token count, is more error-prone, takes more time to generate, is less scalable, more brittle, harder to maintain, harder to audit... These are major negatives to avoid when working with LLMs. So I don't understand how the author reached the conclusion they did.
Please note that we had two independent, working low-code systems back in the 1990s.
Back then, a domain expert could fire up either Delphi or Visual Basic 6, and build a program that was useful for them. If they needed more performance, they would hire a professional programmer who used their work as a specification, and sanded off the rough edges.
These days, Lazarus is the open source follow on to Delphi. It'll work almost anywhere. I've run it on a Raspberry Pi Zero W! The only downside is the horrible documentation.
Microsoft went off the rails with their push towards .NET, sadly.
It has been a very long time since I have written any Delphi, but I prefer dotnet these days. The only (very real) problem is trying to predict what the best long term UI option is for native apps.
Dumb question: what's the difference between "low-code" and "libraries+frameworks"?
Usually the point of a library or framework is to reduce the amount of code you need to write, giving you more functionality at the cost of some flexibility.
Even in the world of LLMs, this has value. When it adopts a framework or library, the agent can produce the same functionality with fewer output tokens.
But maybe the author means, "We can no longer lock in customers on proprietary platforms". In which case, too bad!
> Dumb question: what's the difference between "low-code" and "libraries+frameworks"?
There's not much technical difference.
The way those names are used, "low-code" is focused on inexperienced developers and prefers features like graphical code generators and ignoring errors. On the other hand, "frameworks" are focused on technical users and prefer features like api documentation and strict languages.
But again, there's nothing on the definition of those names that requires that focus. They are technically the same thing.
Agreed. Libraries and frameworks definitely adhere to a 'low-code' philosophy.
Your last idea makes sense as well, to some extent. Once you abstract away from the technical implementation details and use platforms that let you focus only on business logic, it becomes easier to move between platforms that support similar underlying functionality. That said, some functionality may be challenging for different providers to replicate correctly. But some of the core constructs, like authentication mechanisms and access controls, might be mostly interchangeable; we may end up with a few competing architectural patterns, with different platforms fitting under one of them, each optimized for slightly different use cases.
Interesting article -- my take on low-code has always been less about how much initial development time the application takes to code, and more about how it can ease the long term maintenance of an application. With AI tooling it is going to be easy for companies to spin up hundreds of internal applications, but how are they accounting for the maintenance and support of those applications?
Think of the low-code platform as a place to host applications where many (not all) of the operational burdens of long-term maintenance are shifted to the platform, so developers don't have to spend as much time on things like library upgrades, or switching to new framework X because the old framework is deprecated.
Very true! Here's why internal dashboards keep getting rebuilt:
https://www.timestored.com/pulse/why-internal-dashboards-get...
It took me a few years to home in on the exact idea you've captured and I work in this exact area. There's a middle layer between UI team and notebook experiments that isn't worth companies building themselves.
Auth is a pretty classic case where it’s not hard to make your own account create/login form but it’s really hard to make a good one that does all the “right things”.
I am the founder of DronaHQ, a bootstrapped low-code platform. I fully concur that AI is eating low-code platforms for breakfast.
However, the underlying principles haven't changed:
- Engineering bandwidth is minimally available for internal tools.
- Enterprise controls/guardrails are important needs.
- Bringing data into your app is a must-have.
- Maintaining code vs. low-code apps: low-code has been a lot easier.
In a conversation with a CTO at a VC fund, he predicted that within 4-6 quarters you'll see demand back to peak in the low-code segment!
In a customer conversation: the customer made one tool with Cursor and was very successful, but by the time he started adding features for 2.0, everything started breaking and he wanted to move back to low-code.
As a low-code vendor, we just added an internal-tool-building agent that writes React code underneath and leverages the core capabilities of the platform, giving users the best of both worlds.
But surely interesting times ahead for the category. Let’s see if it survives or dies!
My personal take: it will survive and converge with agentic AI!
I'm not so sure. IMO agents are actually a huge unlock for low code tools because before you had to teach a disinterested human how to use your new DSL/tool. But Agents are a lot more patient and enthusiastic. So you can have the agent generate the low-code instead of the human.
You could try to generate the business tools straight from the conventional toolsets, but the problem is that agents are still far too unreliable for that. However, just like humans, if you dumb down the space and give them a smaller, simpler set of primitives, they can do a lot better.
The idea that "now that AI can churn out massive amounts of code quickly and for little cost, we should just stop trying to minimize the amount of code, because code is now basically free" is magical thinking that runs counter to what is actually happening.
The key insight that's missing is that code creation is the cheapest aspect of software development; reading the code, maintaining the code and adapting the code to new requirements is by far the most difficult and time-consuming part and the speed of code creation is irrelevant there. The smallest trade-off which compromises quality and future-proofing of the code is going to cost multiples the next time you (or the LLM) needs to look at it.
People with industry experience know very well what happened when companies hired developers based on their ability to churn out a large volume of code. Over time, these developers start churning out more and more code, at an accelerating rate; creating an illusion of productivity from the perspective of middle-managers, but the rate of actual new feature releases grinds to a halt as the bug rate increases.
With AI, it's going to be the same effect, except MUCH worse and MUCH more obvious. I actually think that it will get so bad that it will awaken people who weren't paying attention before.
I totally agree with this, and with the comment above yours regarding predictability. I don't understand the manufactured FUD that the linked article, or the low-code spreadsheet product in another comment, is creating here. It's literally a perfect match.
LLMs can assist you to write a shitload of useless bloatware, or can assist you to take something existing and complicated, and create something minimal that is almost as good. It's up to you.
In an ideal world, we come to a consensus on best practices for specifications to feed into AI, especially for random non-tech companies looking for internal LOB applications. On the other hand, that would require us to care about documentation again...
Speaking as someone who spent 8 years building nocode tools, had two exits, and stepped out of the industry last year: I’m not bitter, and I’m not cheerleading either.
For apps, where "nocode" is basically an app template / API template builder, it was always 50% useful, 50% marketing to sell you extra services. You still need an advanced-builder mindset: people who think like engineers but don't want to write code. That's a weird combo, and it's really hard to find consistently.
For business logic, it’s almost the opposite. Nocode can give you a clean, visual UX—a clear map of how the logic is connected instead of a spaghetti mess in code. That value sticks around wherever “explain how this works” matters. Not everywhere, but definitely enough places for a real market.
A twist on that could be a hybrid that explains how it was built and has some quick controls, rather than just a typed prompt, e.g. a no-code agentic UI.
For our startup, the low-code vs. LLM shift started out hugely frustrating and scary, but also hopeful. After years of dev, we were getting ready to launch our low-code app product #2, and then bam: ChatGPT 3.5 happened and LLMs stopped sucking so much.
We had to look at the future of our corner of the world (bringing our tricky GPU graph-investigation tech beyond the data 1%'ers at top gov/bank/tech/cyber investigation teams to something most teams can do) and made the painful and expensive call to kill the low-code product.
The good news is that, as a verticalized startup, the market still needed something here for the same reason we originally built it. LLMs just meant the writing was on the wall that market expectations would grow, as would what's possible in general. We correctly guessed that would happen and started building louie.ai. Ex: while we had already viewed our low-code platform as doubling as a way for teams to write down their investigation flows so they could eventually do ML-powered multi-turn automations on them... we never dreamed we'd be speed-running investigation capture-the-flag competitions. Likewise, we're now years ahead of schedule on shedding the shackles of Python-first notebooks and dashboards.
So yeah, for folks doing generic low-code productivity apps, it's not great. n8n and friends had to reinvent themselves as AI workflows, and there's still good reason to believe that as agent experiences improve, they'll get steamrolled anyways... but...
Verticalized low-code workflow tools get to do things that are hard for the Claude Codes. Today the coding envs are built better than most of the non-AI-native vertical teams', but the patterns are congealing and commoditizing. It'll be interesting as the AI side continues to commoditize and the vertical teams get better at it, at which point the verticals get much more valuable again. (And indeed, we see OpenAI and friends hitting ceilings on generic applications and having to lean into top verticals, at least in the b2b world.)
I can only speak for my work. We tried for a few months to use Retool and n8n, but the calculus just wasn't there when I could spin up internal dashboard tools in less time, with better functionality and less work. It was truly a win/win/win all around.
Things I built for internal use pretty quickly:
- Patient matcher
- Great UX crud for some tables
- Fax tool
- Referral tool
- Interactive suite of tools to use with Ashby API
I don't think these no-code tools have much of a future. Even using the no-code tool's version of "AI" was just the AI trying to finagle the no-code featureset to get where I needed it to be, failing most of the time.
Much easier to just have Claude Code build it all out for real.
Fuck all this pointless noise, verbose analysis, LLMs and other associated crap.
Just someone give me MS Access for the web with an SSO module and let me drive it.
That'd cover 99% of LOB app needs and allow me to actually get shit done without tools that dissolve in my hands or require hordes of engineers to keep running or have to negotiate with a bullshit generator to puke out tens of thousands of lines of unmaintainable javascript crap.
We have achieved nothing in the last 25 years if we can't do that. Everyone who entered the industry since about 2005 appears to be completely braindead on how damn easy it was to get stuff actually done back then.
I'll admit to having joined the industry after 2005.
Can you say more about how easy it was to get stuff done back then? What was actually easier? Was Access just good and you didn't need to deal with building web apps?
Great to see your name here, Zack. I think the problem with "low-code" is that it's a catch-all spanning the primarily data-storage-and-use tools (Airtable, Quickbase, FileMaker, etc.), the primarily app-alternative platforms (Retool, Mendix, etc.), and the ETL tools.
To me, AI changes the inflection points of build vs. buy a bit for app platforms, but not as much for the other two. Ultimately, AI becomes a huge consumer of the data coming from impromptu databases, and becomes useful when it connects to other platforms (I think this is why there is so much excitement around n8n, but also why Salesforce bought Informatica).
Maybe low-code as a category dies, but just because it is easier for LLMs to produce working code, doesn't make me any more willing to set up a runtime, environment, or other details of actually getting that code to run. I think there's still a big opportunity to make running the code nice and easy, and that opportunity gets bigger if the barriers to writing code come down.
Great point, and I agree the catch-all nature of the category feels overly broad. At our company, we've felt this shift most clearly on the app-building side so far, but I'm curious to see how the low-code data applications fare as context windows grow and the core LLM providers improve their collaboration tools, governance, and the UX of on-demand app creation. Nice to see you, too!
Low code is not disappearing - it is simply changing.
What do you think an LLM is if not no/low-code?
And all the other components, such as MCPs, skills, etc.: this is all low-code.
And what is going to plug all of these into a coherent system? Claude Code, Copilot, etc., which are basically low-code interfaces. Sure, they don't come with a workflow-style designer, but they do the same thing.
As for the vibe-coded projects: as someone who has personally made this mistake twice in my career and promised never to make it again, sooner or later the OP will realise that software is a liability, with and without LLMs. It is a security, privacy, maintenance, and general business burden, and a risk that needs to be highlighted on every audit and at every step.
When you start running the bills, all of these internal vibe-coded tools will cost 10-20x the original subscriptions, paid indirectly.
An LLM is not low code. It's something that generates the thing that does the thing.
Most of the time it generates 'high' code, code that looks like hieroglyphics to non-developers.
If it generated low code, then it's possible non-developers could have something comprehensible to them (at least down as far as the deterministic abstraction presented by the low-code framework).
Low Code is just diagram->code in many respects. Many platforms have basically similar constructs.
I work on a 'low code' platform (well, not really, but we do a lot of EDI). This requires a bunch of very common patterns, so we basically have a mini-DSL for mapping X12 and EDIFACT into other objects.
You guessed it, we have a diagram flow control tool.
It works. Yes, I could write it in JavaScript too... but most of the 'flow control bits' really live inside a small sandbox. Of course, we let you kick out to a sandbox and program if needed.
But for the most part, yeah, a good mini-DSL gets you 90% of the way there for us, and we don't reach for programming too often.
So, it's still useful to abstract some stuff.
Could AI write it by hand every time? Yes... but you would still want all the bells and whistles.
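A mapping mini-DSL like the one described can be as small as a dictionary of target fields to source positions. A hypothetical sketch in Python, where the segment IDs and element positions are purely illustrative, not faithful X12 semantics:

```python
# Hypothetical mapping mini-DSL: each target field names the segment, the
# element index within it, and a cast. (Segment IDs and positions here are
# illustrative, not real X12 semantics.)
MAPPING = {
    "po_number":  ("BEG", 2, str),
    "order_date": ("BEG", 4, str),
    "qty":        ("PO1", 1, int),
}

def apply_mapping(segments: dict[str, list[str]], mapping: dict) -> dict:
    """Turn a parsed message (segment id -> element list) into a plain object."""
    return {field: cast(segments[seg][idx])
            for field, (seg, idx, cast) in mapping.items()}

# A pre-parsed message, standing in for the output of a real EDI parser:
parsed = {"BEG": ["00", "NE", "PO-1001", "", "20240105"],
          "PO1": ["1", "12", "EA"]}
print(apply_mapping(parsed, MAPPING))
# {'po_number': 'PO-1001', 'order_date': '20240105', 'qty': 12}
```

The point is that the "flow control bits" stay declarative and auditable; only the rare edge case has to escape to real code.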
Low-code tooling for actual developers is dying, but AI might be the thing that makes low-code take off in the broader market. Software development will look very different five years from now. It could be filled with knowledge workers with no CS education using no-code tools and AI, while the hardcore engineers still build the technology they build on.
A strong advantage a platform like retool has in the non-developer market is they own a frictionless deployment channel. Your average non-developer isn't going to learn npm and bash, and then sign up for an account on AWS, when the alternative is pushing a button to deploy the creation the AI has built from your prompt.
I posted this prediction over a year ago in the Salesforce subreddit, and it was an extremely unpopular take[0] (so much so that I don't know if I can post there anymore). The basic argument was that "low code" DSLs are far less optimized and accessible for LLMs, and that billions are being invested in general-purpose, code-first tooling.
"While it’s possible low-code platforms will survive by providing non-technical users with the kind of magical experience that’s already possible for developers with AI coding tools today" - that magical experience is already available to non-technical users today. The last barrier is deployment / security / networking / maintenance, but I'm assuming there are a lot of startups working on that.
In a way, low-code has been the worst of both worlds: complex, locked-in, not scalable, expensive, with small ecosystems of support for self-learning.
(Context: worked at appsheet which got acquired by Google in 2020)
Existing tools already do a great job if you just want a magical looking prototype but they're not versatile enough for real production applications where those other aspects you mentioned actually matter (deployment, security, networking, maintenance, scalability, lock-in factor, costs...). Existing tools have focused on creating a 'magical' experience at the expense of all the critical stuff that needs to go under the bonnet.
There's a parallel with LLMs as well. You could build great prototypes with LLMs coding fully autonomously from start to finish... But if you want to build a real production system (beyond a certain low degree of complexity), currently, you NEED human involvement. The reason why you need human involvement is because there's just too much complexity, too much code to manage for a real production system. None of the existing low-code tools actually solve that problem of reducing complexity whilst maintaining production-readiness.
Heavily disagree (for serious low-code platforms). AI produces code that only AI maintains. That's a big burden. Low-code vendors ship code patterns that are easy to compose. They have the burden of maintaining that code.
Let AI build apps using these building blocks instead of wasting tokens reinventing the wheel on how interactive tables should be, which chart library to use, and how speaking from the frontend to the backend works securely.
LLMs will make creating low-code apps as easy as normal apps. But it has one constraint: how extensible is the low-code framework?
Low-code tools have guardrails that keep things predictable - you know what a Retool app can and can't do. AI-generated internal tools are just... code. Code that will need updating when APIs change, when requirements shift, when the person who prompted it leaves.
"We migrated everything in a couple of sprints" is doing a lot of heavy lifting. Check back in 18 months when half those tools have drifted into unmaintained states and someone has to figure out what handleEdgeCase_v3_final_FIXED.js was supposed to do.
Well, that's the problem right there: business logic belongs in the database, not JavaScript rat's nests. I'd argue that Claude and even Codex are so good with SQL that it would be insane to use any other approach. And the database survives the hot-framework-of-the-week churn.
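One concrete form of "logic in the database" is constraints in the schema, which hold no matter which frontend framework writes to the table. A minimal sketch using SQLite for portability; the table and rules are invented for illustration:

```python
import sqlite3

# Business rules expressed as database constraints (schema invented for
# illustration): they are enforced regardless of which frontend framework,
# script, or agent writes to the table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id       INTEGER PRIMARY KEY,
        quantity INTEGER NOT NULL CHECK (quantity > 0),
        status   TEXT NOT NULL DEFAULT 'open'
                 CHECK (status IN ('open', 'shipped', 'cancelled'))
    )
""")
conn.execute("INSERT INTO orders (quantity) VALUES (5)")      # passes the rule
try:
    conn.execute("INSERT INTO orders (quantity) VALUES (0)")  # violates it
except sqlite3.IntegrityError as e:
    print("rejected by the database:", e)
```

Triggers and views extend the same idea to multi-row rules, and they survive a full rewrite of the application layer.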
It's funny, I think of tools like v0.dev, Bolt, etc. as low-code tools too. The definition in my mind: does the person building the website understand, or are they exposed to, the underlying tech?
I am the founder of one of the very first no-code/low-code platforms (DaDaBIK, first released in 2001). Three years ago I released appifytext.ai, an AI agent that develops DaDaBIK apps starting from textual specifications.
Low-code and LLMs can coexist: low-code can be just another layer (or, if you prefer, a more abstract programming language) that LLMs can use. You have less freedom, but more predictability and robustness, which is perfectly fine for internal tools.
A lot of negative responses so I'll provide my own personal corroborating anecdote. I am intending to replace my low-code solutions with AI-written code this year. I have two small internal CRUD apps using Budibase. It was a nice dream and I still really like Budibase. I just find it even easier yet to use AI to do it, with the resulting app built on standard components instead of an unusual one (Budibase itself). I'm a programmer so I can debug and fix that code.
> "For us, abandoning low-code to reclaim ownership of our internal tooling was a simple build vs buy decision with meaningful cost savings and velocity gains. It also feels like a massive upgrade in developer experience and end-user quality of life. It’s been about 6 months since we made this switch, and so far we haven’t looked back."
Fascinating but not surprising given some of the AI-for-software development changes of late.
Never really saw the appeal of low-code. We use the ABP framework on the backend, and it takes care of 80% of the boring work out of the box with a battle-tested codebase: multi-tenancy, user management, permissions, OIDC auth, auditing, background jobs, etc. With that handled, you mostly focus on core business logic. Combined with AI for speed, shipping a production-ready system in months is very realistic.
This ABP framework sounds like a low-code tool to me. The ability to focus only on business logic is basically the entire premise of low-code. Your argument actually supports the opposite conclusion from the article author's: you're suggesting that LLMs work better when they can focus only on the business logic, without getting tangled in the weeds of technical details.
I recently used the AI feature in n8n to write a code node to parse some data, which worked really well. Feels more like LLMs are enhancing low-code solutions.
Also, I see great value in not having to take care of the runtime itself. Sure, I can write a python script that does what I want much quicker and more effectively with claude code, but there is also a bunch of work to get it to run, restart, log, alert, auth…
Yeah, I'm someone who uses Codex, but I still use n8n and node-red for some automation stuff in my house.
n8n allows you to work at a higher level, and working at this level allows you to do things in a way that's more likely to be "correct". While it's not necessarily "difficult" to do integrations with Home Assistant or Discord or any of the million integrations that n8n has, it can still be error-prone, even for experienced developers.
With n8n, I'm pretty sure I could even have my parents set up and more importantly debug pipelines to control their thermostat or something. Even if I could get them to prompt Codex or Claude or something, I think it would be hard for them to debug the output if they had to.
Yes, if the low-code solution has a good interface for LLMs. There are plenty of low-code / no-code solutions with a pure graphical interface and nothing else. Dinosaurs.
I always felt that the biggest problem with low code was the wall you hit when you tried to change something small. You had to fight the tool just to make the button look the way you wanted. AI gives you the speed of low code but allows you to build anything you can imagine. It makes sense to stop paying for tools that limit your freedom.
I have to smile a little at the graphic with the headline “LOW-CODE VALUE PROP.” Judging by the few low-code applications I've seen so far, the images should be arranged in exactly the opposite order.
I don't know if TouchDevelop falls under low code, but I remember dumping six months into it and then abandoning it. I think I did learn programming concepts from it, since it did have code.
Writing code is a tiny part of the problem. Operating the resulting system is also a huge value provided by low code companies. If LLMs have eliminated ops I haven't heard about it yet.
I think solutions like Plasmic, Refine, and React-Admin probably have a strong place in this AI future. They combine an LLM-first workflow with the agility of a solid foundation to build on. Otherwise you're stuck with the whims of untested AI slop for everything from security to component design. There's a reason people writing AI code still use libraries, and it's the same reason we used libraries pre-AI: they're tested, they're stable, they have clear documentation. Just because the cost of code is zero doesn't mean the cost of software systems is zero.
davidpolberger|1 month ago
The next generation of Calcapp probably won't ship with a built-in LLM agent. Instead, it will expose all functionality via MCP (or whatever protocol replaces it in a few years). My bet is that users will bring their own agents -- agents that already have visibility into all their services and apps.
I hope Calcapp has a bright future. At the same time, we're hedging by turning its formula engine into a developer-focused library and SaaS. I'm now working full-time on this new product and will do a Show HN once we're further along. It's been refreshing to work on something different after many years on an end-user-focused product.
I do think there will still be a place for no-code and low-code tools. As others have noted, guardrails aren't necessarily a bad thing -- they can constrain LLMs in useful ways. I also suspect many "citizen developers" won't be comfortable with LLMs generating code they don't understand. With no-code and low-code, you can usually see and reason about everything the system is doing, and tweak it yourself. At least for now, that's a real advantage.
zackliscio|1 month ago
Agree there will be a place for no-code and low-code interfaces, but I do think it's an open question where the value capture will be--as SaaS vendors, or by the LLM providers themselves.
mycall|1 month ago
I saw a Second Brain demo [0] built with no-code, with AI inside the widgets doing the work. It took a little handholding, but it turned out to be very flexible, showing me a new way to look at the whole industry.
[0] https://youtu.be/gLaMDOrDGHA?si=CIWVD-TLJrPju1RO
theshrike79|1 month ago
sergiotapia|1 month ago
spankalee|1 month ago
There's a lot of value in having direct manipulation and visual introspection of UIs, data, and logic. Those things allow less technical people to understand what the agents are creating, and ask for help with more specific areas.
The difficulty in the past has been 1) the amount of work it takes to build good direct manipulation tools - the level of detail you need to get to is overwhelming for most teams attempting it - but LLMs themselves make this a lot easier to build, and 2) what to do when users hit the inevitable gaps in your visual system. Now LLMs fill these gaps pretty spectacularly.
hecanjog|1 month ago
In my most hopeful of futures, we've figured out how to do lightweight inference, and if the models don't run locally at least they aren't harming the planet, and all this AI tooling hydrates all the automation projects of the last 40 years so that my favorite tiny local music label can have a super custom online shop that works exactly the way they need without having to sacrifice significant income to do it.
solomonb|1 month ago
willtemperley|1 month ago
Yes. In the GIS industry, for example, nothing has fundamentally changed with the introduction of LLMs. They may make the same processes more efficient, e.g. through automated building of workflows. AI has significantly improved classification work, of course, but it's still using the same principles (we've been doing ML longer than most industries). Geocoding will get cheaper and easier, but it's still geocoding.
GIS software for standard visualisation, export and map production will get a lot better because of LLMs. It's an area where the sheer complexity and number of formats was overwhelming, but now a GeoTiff parser can be built in a day or two.
The article was making a bit of a sweeping statement based on a single datapoint: they didn't need Retool anymore.
CodeCompost|1 month ago
mkoubaa|1 month ago
vannevar|1 month ago
kccqzy|1 month ago
A lot of value indeed, but not just for less technical people. Imagine ddd vs gdb. Usually some kind of visual debugging aid isn’t available in an environment because the ROI isn’t there, not because technical people love mental parsing or hate graphics. The LLM revolution is changing the calculus here: creating new tools and new visualizations is easier than ever. It would be unthinkable three years ago to create a visual debugging aid just to use it once, outside of truly gnarly and show-stopping bugs; now it could very well be feasible.
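To make the "one-off visualization" idea concrete, here's a hypothetical sketch of the kind of throwaway aid an LLM could generate in a minute: a few lines of Python that dump a nested dict as Graphviz DOT so a tree-shaped structure can be eyeballed instead of mentally parsed in a debugger. The `to_dot` name and dict-based input are invented for illustration.

```python
# Hypothetical throwaway visual aid: dump a nested dict as Graphviz DOT
# text; pipe the output to `dot -Tpng` to get a picture of the structure.
def to_dot(tree: dict, name: str = "debug") -> str:
    lines = [f"digraph {name} {{"]

    def walk(node: dict, path: str = "root") -> None:
        for key, value in node.items():
            child = f"{path}_{key}"
            # one edge per key, labeled with the key itself
            lines.append(f'  "{path}" -> "{child}" [label="{key}"];')
            if isinstance(value, dict):
                walk(value, child)
            else:
                # leaf values become labeled nodes
                lines.append(f'  "{child}" [label="{value!r}"];')

    walk(tree)
    lines.append("}")
    return "\n".join(lines)
```

For example, `to_dot({"a": {"b": 1}})` yields DOT text with an edge from `root` to `root_a` and a leaf node for `1` — disposable by design, which is exactly the point.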
rahilb|1 month ago
Does anyone actually believe this is the case? I use LLMs to ‘write’ code every day, but it’s not the case for me; my job is just as difficult and other duties expand to fill the space left by Claude. Am I just bad at using the tools? Or stupid? Probably both but c’est la vie.
Foobar8568|1 month ago
oxidant|1 month ago
Writing code is the "easy" part and kind of always has been. No one triggers incidents from a PR that's been in review for too long.
zackliscio|1 month ago
As a dev team, we've been exploring how we grapple with the cultural and workflow changes that arise as these tools improve--it's definitely an ongoing and constantly evolving conversation.
nicohorn|1 month ago
antidamage|1 month ago
I personally hope that the future becomes a UBI consumer-as-a-job thing, minus too much of the destructive impact that current consumerism has on the world.
globular-toast|1 month ago
fragmede|1 month ago
socketcluster|1 month ago
Low-code has become especially important now with LLMs for several reasons, especially in terms of stability, maintainability, security and scalability.
If the same feature can be implemented with less code, the stability of the software improves significantly. LLMs work much better with solid abstractions; they are not great at coding the whole thing from scratch.
More code per feature costs more in token count, is more error-prone, takes more time to generate, is less scalable, more brittle, harder to maintain, harder to audit... These are major negatives to avoid when working with LLMs, so I don't understand how the author reached the conclusion they did.
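As an illustration of "less code per feature" through a solid abstraction, here's a hypothetical Python sketch: a declarative schema means each new validation feature is a line of data rather than a fresh block of imperative checks. The `Field`/`validate` names are invented for this example.

```python
# A tiny declarative validation abstraction. Adding a field is one line
# of "business logic" instead of another hand-written if/else block.
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    type_: type
    required: bool = True
    min_len: int = 0

def validate(schema: list, record: dict) -> list:
    """Return a list of error strings for `record` against `schema`."""
    errors = []
    for f in schema:
        value = record.get(f.name)
        if value is None:
            if f.required:
                errors.append(f"{f.name}: missing")
            continue
        if not isinstance(value, f.type_):
            errors.append(f"{f.name}: expected {f.type_.__name__}")
        elif isinstance(value, str) and len(value) < f.min_len:
            errors.append(f"{f.name}: too short")
    return errors

# A whole "validate sign-up forms" feature is now data, not code:
signup = [Field("email", str, min_len=3), Field("age", int, required=False)]
```

An LLM asked to add a field emits one short `Field(...)` line instead of regenerating (and possibly subtly breaking) a block of checks.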
mikewarot|1 month ago
Back then, a domain expert could fire up either Delphi or Visual Basic 6, and build a program that was useful for them. If they needed more performance, they would hire a professional programmer who used their work as a specification, and sanded off the rough edges.
These days, Lazarus is the open source follow on to Delphi. It'll work almost anywhere. I've run it on a Raspberry Pi Zero W! The only downside is the horrible documentation.
Microsoft went off the rails with their push towards .NET, sadly.
pjc50|1 month ago
evv|1 month ago
Usually the point of a library or framework is to reduce the amount of code you need to write. Giving you more functionality at the cost of some flexibility.
Even in the world of LLMs, this has value. When it adopts a framework or library, the agent can produce the same functionality with fewer output tokens.
But maybe the author means, "We can no longer lock in customers on proprietary platforms". In which case, too bad!
marcosdumay|1 month ago
There's not much technical difference.
The way those names are used, "low-code" is focused on inexperienced developers and prefers features like graphical code generators and ignoring errors. On the other hand, "frameworks" are focused on technical users and prefer features like api documentation and strict languages.
But again, there's nothing in the definition of those names that requires that focus. They are technically the same thing.
socketcluster|1 month ago
Your last idea makes sense as well, to some extent. For sure, once you abstract away from the technical implementation details and use platforms that let you focus only on business logic, it becomes easier to move between different platforms that support similar underlying functionality. That said, some functionality may be challenging for different providers to replicate correctly. But some of the core constructs, like authentication mechanisms and access controls, might be mostly interchangeable. We may end up with a few competing architectural patterns, with different platforms fitting under one of them, each optimized for slightly different use cases.
sien|1 month ago
Libraries + Frameworks doesn't mean that unless you're bonkers.
LLMs + Libraries + Frameworks means you might pay to build the application, but running it is only going to be the cost of where it's running.
You're exactly right.
lioeters|1 month ago
therealmocker|1 month ago
Think of the low-code platform as a place to host applications where many (not all) of the operational burdens and long-term maintenance are shifted to the platform, so developers don't have to spend as much time on things like library upgrades or switching to the next new framework because the old one is deprecated.
RyanHamilton|1 month ago
goalieca|1 month ago
kinj28|1 month ago
However, the underlying principles haven't changed:
- Engineering bandwidth is minimally available for internal tools.
- Enterprise controls/guardrails are important needs.
- Bringing data into your app is a must-have.
- Maintaining code vs. low-code apps: low code has been a lot easier.
In a conversation with a CTO at a VC fund, he predicted that within 4-6 quarters you'll see demand back at its peak in the low-code segment!
In a customer conversation, the customer built one tool with Cursor and was very successful, but by the time he started adding features for 2.0, everything started breaking and he wanted to move back to low code.
As a low-code vendor, we just added an internal tool-building agent that writes React code underneath and leverages the platform's other core capabilities, giving users the best of both worlds.
But surely interesting times ahead for the category. Let's see if it survives or dies!
My personal take: it will survive and converge with agentic AI!
rriley|1 month ago
Low-Code and the Democratization of Programming: Rethinking Where Programming Is Headed
https://www.oreilly.com/radar/low-code-and-the-democratizati...
siliconc0w|1 month ago
You could try to generate the business tools straight from the conventional toolsets, but the problem is that agents are still far too unreliable for that. However, just like humans, if you dumb down the space and give them a smaller, simpler set of primitives, they can do a lot better.
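One way to picture "a smaller, simpler set of primitives" is an allowlisted dispatch layer: the agent can only invoke a fixed vocabulary of operations with validated arguments, rather than emitting arbitrary code. A minimal hypothetical sketch (the primitive names and the `run_step` API are invented here):

```python
# Hypothetical sketch: constrain an agent to a tiny menu of primitives
# instead of letting it generate arbitrary code.
PRIMITIVES = {
    "uppercase": lambda s: s.upper(),
    "reverse":   lambda s: s[::-1],
    "truncate":  lambda s, n=10: s[:n],
}

def run_step(step: dict, value: str) -> str:
    """Execute one agent-proposed step, rejecting anything off-menu."""
    op = step.get("op")
    if op not in PRIMITIVES:
        raise ValueError(f"unknown primitive: {op!r}")
    args = step.get("args", {})
    return PRIMITIVES[op](value, **args)

def run_pipeline(steps: list, value: str) -> str:
    # each step is e.g. {"op": "truncate", "args": {"n": 5}}
    for step in steps:
        value = run_step(step, value)
    return value
```

The agent's failure mode shrinks from "plausible but wrong code" to "invalid step", which the harness can catch and retry.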
socketcluster|1 month ago
The idea that "now that AI can churn out massive amounts of code quickly and for little cost, we should just forget trying to minimize the amount of code, because code is now basically free" is magical thinking that opposes what is actually happening.
The key insight that's missing is that code creation is the cheapest aspect of software development; reading the code, maintaining the code and adapting the code to new requirements is by far the most difficult and time-consuming part and the speed of code creation is irrelevant there. The smallest trade-off which compromises quality and future-proofing of the code is going to cost multiples the next time you (or the LLM) needs to look at it.
People with industry experience know very well what happened when companies hired developers based on their ability to churn out a large volume of code. Over time, these developers start churning out more and more code, at an accelerating rate; creating an illusion of productivity from the perspective of middle-managers, but the rate of actual new feature releases grinds to a halt as the bug rate increases.
With AI, it's going to be the same effect, except MUCH worse and MUCH more obvious. I actually think that it will get so bad that it will awaken people who weren't paying attention before.
lloydatkinson|1 month ago
antirez|1 month ago
tptacek|1 month ago
LeSaucy|1 month ago
nxobject|1 month ago
himeexcelanta|1 month ago
maxdo|1 month ago
Speaking as someone who spent 8 years building nocode tools, had two exits, and stepped out of the industry last year: I’m not bitter, and I’m not cheerleading either.
For apps, where “nocode” is basically an app template / API template builder, it was always 50% useful, 50% marketing to sell you extra services. You still need an advanced builder mindset: people who think like engineers, but don’t want to write code. That’s a weird combo, and it’s really hard to find consistently.
For business logic, it’s almost the opposite. Nocode can give you a clean, visual UX—a clear map of how the logic is connected instead of a spaghetti mess in code. That value sticks around wherever “explain how this works” matters. Not everywhere, but definitely enough places for a real market.
A twist on that could be a hybrid that explains how it was built and offers some quick controls, rather than just a typed prompt, e.g. a no-code agentic UI.
lmeyerov|1 month ago
For our startup, the low-code vs LLM shift started hugely frustrating and scary, but also hopeful. After years of dev, we were getting ready to launch our low code app product #2, and then bam, chatgpt 3.5 happened and LLMs stopped sucking so much.
We had to look at the future for our corner of the world -- bringing our tricky GPU graph investigation tech beyond the data 1%'ers at top gov/bank/tech/cyber investigation teams to something most teams can do -- and made the painful and expensive call to kill the low-code product.
The good news is, as a verticalized startup, the market still needed something here for the same reason we originally built it. LLMs just meant the writing was on the wall that market expectations would grow, as would what's possible in general. We correctly guessed that would happen, and started building louie.ai. Ex: while we had previously viewed our low-code platform as doubling as a way for teams to write down their investigation flows so they could eventually do ML-powered multi-turn automations on them... we never dreamed we'd be speedrunning investigation capture-the-flag competitions. Likewise, we're now years ahead of schedule on shedding the shackles of python-first notebooks & dashboards.
So yeah, for folks doing generic low-code productivity apps, it's not great. n8n and friends had to reinvent themselves as AI workflows, and there's still good reason to believe that as agent experiences improve, they'll get steamrolled anyways... but...
Verticalized low-code workflow tools get to do things that are hard for the Claude Codes of the world. Today the coding envs are built better than most of the non-ai-native vertical teams', but the patterns are congealing and commoditizing. It'll be interesting as the AI side continues to commoditize and the vertical teams get better at it, at which point the verticals get much more valuable again. (And indeed, we see OpenAI and friends hitting ceilings on generic applications and having to lean in to top verticals, at least for the b2b world.)
sergiotapia|1 month ago
Things I built for internal use pretty quickly:
I don't think these nocode tools have much of a future. Even using the nocode tool's version of "AI" just meant the AI trying to finagle the nocode featureset to get where I needed it to be, failing most of the time. Much easier to just have Claude Code build it all out for real.
dgxyz|1 month ago
Just someone give me MS Access for the web with an SSO module and let me drive it.
That'd cover 99% of LOB app needs and allow me to actually get shit done without tools that dissolve in my hands or require hordes of engineers to keep running or have to negotiate with a bullshit generator to puke out tens of thousands of lines of unmaintainable javascript crap.
We have achieved nothing in the last 25 years if we can't do that. Everyone who entered the industry since about 2005 appears to be completely braindead on how damn easy it was to get stuff actually done back then.
sien|1 month ago
The Apache foundation or someone ought to target that as a proper Open Source setup.
normanvalentine|1 month ago
Can you say more about how easy it was to get stuff done back then? What was actually easier? Was Access just good and you didn't need to deal with building web apps?
NetMageSCW|1 month ago
pphysch|1 month ago
This is essentially Rails and Django and so on
bronco21016|1 month ago
3acctforcom|1 month ago
Embrace Oracle Apex.
fock|1 month ago
abakker|1 month ago
To me, AI changes the inflection points of build vs buy a bit for app platforms, but not as much for the other two. Ultimately, AI becomes a huge consumer of the data coming from impromptu databases, and becomes useful when it connects to other platforms (I think this is why there is so much excitement around n8n, but also why Salesforce bought informatica).
Maybe low-code as a category dies, but just because it is easier for LLMs to produce working code, doesn't make me any more willing to set up a runtime, environment, or other details of actually getting that code to run. I think there's still a big opportunity to make running the code nice and easy, and that opportunity gets bigger if the barriers to writing code come down.
zackliscio|1 month ago
_pdp_|1 month ago
What do you think an LLM is if not no/low-code?
And all the other components such as MCPs, skills, etc this is all low-code.
And who is going to plug all of these into a coherent system like Claude Code, Copilot, etc., which is basically a low-code interface? Sure, it does not come with a workflow-style designer, but it does the same thing.
As far as the vibe-coded projects go, as someone who has personally made this mistake twice in my career and promised to never make it again, sooner or later the OP will realise that software is a liability with and without LLMs. It is a security, privacy, maintenance and general business burden, and a risk that needs to be highlighted on every audit and at every step.
When you start running the bills, all of these internal vibe-coded tools will run 10-20x the cost of the original subscriptions, paid indirectly.
discreteevent|1 month ago
An LLM is not low code. It's something that generates the thing that does the thing.
Most of the time it generates 'high' code -- code that looks like hieroglyphics to non-developers.
If it generated low code, then it's possible that non-developers could have something that is comprehensible to them (at least down as far as the deterministic abstraction presented by the low-code framework).
calvinmorrison|1 month ago
I work on a 'low code' platform (well, not really), but we do a lot of EDI. That requires a bunch of very normal patterns, so we basically have a mini-DSL for mapping X12 and EDIFACT into other objects.
You guessed it, we have a diagram flow control tool.
It works, yes I can write it in Javascript too... but most of the 'flow control bits' are really inside of a small sandbox. Of course, we allow you to kick out to a sandbox and program if needed.
But for the most part, a good mini-DSL gets you 90% of the way there, and we don't reach for programming too often.
So it's still useful to abstract some stuff.
Could AI write it by hand every time? Yes... but you'd still want all the bells and sidepieces.
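For flavor, a mapping mini-DSL like the one described might boil down to a table of source positions to target fields. This is a hypothetical Python sketch, not the actual product's DSL, and the segment/element addressing is simplified far beyond real X12/EDIFACT:

```python
# Hypothetical sketch of a declarative EDI-style mapping mini-DSL:
# each rule names which (segment, element) in the source document
# feeds which field of the target object.
MAPPING = {
    "order_id": ("BEG", 3),   # e.g. in an X12 850, BEG03 is the PO number
    "ship_to":  ("N1", 2),
    "quantity": ("PO1", 2),
}

def parse_segments(raw: str) -> dict:
    """Parse 'SEG*el1*el2~SEG*...' into {segment_id: [elements]}."""
    out = {}
    for seg in filter(None, raw.strip().split("~")):
        parts = seg.split("*")
        out[parts[0]] = parts[1:]
    return out

def apply_mapping(raw: str, mapping: dict) -> dict:
    segments = parse_segments(raw)
    result = {}
    for field, (seg_id, idx) in mapping.items():
        elements = segments.get(seg_id, [])
        # element positions are 1-based by EDI convention
        result[field] = elements[idx - 1] if len(elements) >= idx else None
    return result
```

The point is the shape: the mapping itself is data that non-programmers (or a diagram tool) can edit, and the escape hatch to real code only opens when a rule doesn't fit the table.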
ryanackley|1 month ago
A strong advantage a platform like retool has in the non-developer market is they own a frictionless deployment channel. Your average non-developer isn't going to learn npm and bash, and then sign up for an account on AWS, when the alternative is pushing a button to deploy the creation the AI has built from your prompt.
subpixel|1 month ago
In my company I feel like the last to this party,
cjonas|1 month ago
https://www.reddit.com/r/salesforce/comments/1hxxdls/unpopul...
kamselig|1 month ago
In a way, low-code has been the worst of both worlds: complex, locked-in, not scalable, expensive, with small ecosystems of support for self-learning.
(Context: worked at appsheet which got acquired by Google in 2020)
socketcluster|1 month ago
Existing tools already do a great job if you just want a magical looking prototype but they're not versatile enough for real production applications where those other aspects you mentioned actually matter (deployment, security, networking, maintenance, scalability, lock-in factor, costs...). Existing tools have focused on creating a 'magical' experience at the expense of all the critical stuff that needs to go under the bonnet.
There's a parallel with LLMs as well. You could build great prototypes with LLMs coding fully autonomously from start to finish... But if you want to build a real production system (beyond a certain low degree of complexity), currently, you NEED human involvement. The reason why you need human involvement is because there's just too much complexity, too much code to manage for a real production system. None of the existing low-code tools actually solve that problem of reducing complexity whilst maintaining production-readiness.
phartenfeller|1 month ago
Let AI build apps using these building blocks instead of wasting tokens reinventing the wheel on how interactive tables should be, which chart library to use, and how speaking from the frontend to the backend works securely.
LLMs will make creating low-code apps as easy as normal apps. But it has one constraint: how extensible is the low-code framework?
agent013|1 month ago
cpursley|1 month ago
jeffybefffy519|1 month ago
eugeniox|1 month ago
Low-code and LLMs can coexist: low-code can be just another layer (or, if you prefer, a more abstract programming language) that LLMs can use. You have less freedom, but more predictability and robustness, which is perfectly fine for internal tools.
electroly|1 month ago
banku_brougham|1 month ago
Is this a commonly held assumption?
marcosdumay|1 month ago
I can get assembly from /dev/urandom for cents on the TB.
nadis|1 month ago
sreekanth850|1 month ago
socketcluster|1 month ago
CountVonGuetzli|1 month ago
theLiminator|1 month ago
tombert|1 month ago
nicewood|1 month ago
calvinmorrison|1 month ago
who needs SaaS
dfajgljsldkjag|1 month ago
igogq425|1 month ago
ge96|1 month ago
ares623|1 month ago
padjo|1 month ago
unknown|1 month ago
[deleted]
hahahahhaah|1 month ago
But if I can get my AI to use an off-the-shelf open source flow orchestrator rather than manually coding API calls, that's better.
gman2093|1 month ago
tasuki|1 month ago
pjmlp|1 month ago
stuaxo|1 month ago
fullstackchris|1 month ago
odie5533|1 month ago
smnplk|1 month ago
Really?
toomuchtodo|1 month ago
zackliscio|1 month ago