top | item 46866481

Coding assistants are solving the wrong problem

193 points| jinhkuan | 27 days ago |bicameral-ai.com

147 comments


micw|27 days ago

For me, AI is an enabler for things you can't do otherwise (or that would take many weeks of learning). But you still need to know how to do things properly in general, otherwise the results are bad.

E.g. I've been a software architect and developer for many years. So I already know how to build software, but I'm not familiar with every language or framework. AI enabled me to write other kinds of software I never learned or had time for. E.g. I recently re-implemented an Android widget that had not been updated for a decade by its original author. Or I fixed a bug in a Linux scanner driver. None of these I could have done properly (within an acceptable time frame) without AI. But also none of these I could have done properly without my knowledge and experience, even with AI.

Same for daily tasks at work. AI makes me faster here, but it also lets me do more. Implement tests for all edge cases? Sure, always, where before I skipped them to save time. More code reviews. More documentation. Better quality in the same (always limited) time.

mirsadm|27 days ago

I use Claude Code a lot, but one thing that really concerned me was when I asked it about some ideas I have had, which I am very familiar with. Its response was to constantly steer me away from what I wanted to do towards something else that was fine but a mediocre way to do things. It made me question how many times I've let it go off and do stuff without checking it thoroughly.

bonoboTP|27 days ago

Yes, but in my experience this sometimes works great, and other times you paint yourself into a corner and the sum total is that you still have to learn the thing, just with a less steep initial ramp. For example, I built myself a nice pipeline for converting JPEGs on disk to H.264 on disk via zero-copy nvjpeg to NVENC, with Python bindings, but have been pulling my hair out over B-frame ordering and weird delays in playback, etc. Nothing unsolvable, but I had to learn a great deal, and when we were in the weeds, Opus was suggesting stupid quick-fix hacks that made a game of whack-a-mole with the tests. In the end I had to learn and read enough to be able to ask it with the right vocabulary to make it work. Similarly with entering many novel areas. Initially I get a rush because it "just works", but it really only works for the median case initially, and it's up to you to even know what to test. And AIs can be quite dismissive of edge cases, saying things like "this will not happen in most cases so we can skip it", etc.

bandrami|27 days ago

Huh. I'm extremely skeptical of AI in areas where I don't have expertise, because in areas where I do have expertise I see how much it gets wrong. So it's fine for me to use it in those areas because I can catch the errors, but I can't catch errors in fields I don't have any domain expertise in.

joshbee|27 days ago

I'm in the same boat. I've been taking on much more ambitious projects both at work and personally by collaborating with LLMs. There are many tasks that I know I could do myself but would require a ton of trial and error.

I've found that giving the LLMs the input and output interfaces really helps keep them on rails, while still being involved in the overall process without just blindly "vibe coding."

Having the AI also help with unit tests around business logic has been super helpful, in addition to manual testing like normal. It feels like our overall velocity and code quality have been going up, regardless of what some of these articles are saying.

netdevphoenix|27 days ago

> Or I fixed a bug in a linux scanner driver. None of these I could have done properly (within an acceptable time frame) without AI. But also none of these I could have done properly without my knowledge and experience, even with AI

There are some things here that folks making statements like yours often omit, and it makes me very suspicious of your (over)confidence. Mostly these statements talk in a business, short-term, results-oriented mode without mentioning any introspective gains (i.e. empirically supported understanding) or long-term gains (do you feel confident now in making further changes _without_ the AI, now that you have gained new knowledge?).

1. Are you 100% sure your code changes didn't introduce unexpected bugs?

1a. If they did, would you be able to tell if they were behaviour bugs (i.e. no crashing or exceptions thrown) without the AI?

2. Did you understand why the bug was happening without the AI giving you an explanation?

2a. If you didn't, did you empirically test the AI's explanation before applying the code change?

3. Has fixing the bug improved your understanding of the driver behaviour beyond what the AI told you?

3a. Have you independently verified your gained understanding or did you assume that your new views on its behaviour are axiomatically true?

Ultimately, there are two things here: one is understanding the code change (why it is needed, why that particular implementation is better relative to others, what future improvements could be made to it), and the other is skill (has this experience boosted your OWN ability in this particular area? In other words, could you make further changes WITHOUT using the AI?).

This reminds me of people who get high and believe they have discovered amazing truths, because they FEEL it, not because they have actual evidence. When asked to write down these amazing truths while high, all you get in the notes are meaningless words. While these assistants are more amenable to empirical testing, I don't believe most of the AI hypers (including you in that category) are actually approaching this with the rigour it entails. It is likely why people often think that none of you (people writing software for a living) are experienced in or qualified to understand and apply scientific principles to build software.

Arguably, AI hypers should lead with data, not with anecdotal evidence. For all the grandiose claims, empirical data obtained under controlled conditions on this particular matter is conspicuous by its absence.

ivell|27 days ago

In my case I built a video editing tool fully customized for a community of which I am a member. I could do it in a few hours. I wouldn't even have started this project otherwise, as I don't have much free time, though I have been coding for 25+ years.

I see it as empowering for building custom tooling that need not be a high-quality, maintained project.

trcf23|27 days ago

Also, most of the studies cited are starting to become obsolete given AI's rapid pace of improvement. Opus 4.5 has been a huge game changer for me since December (combined with Claude Code, which I had not used before). Claude Code arrived this summer, if I'm not mistaken.

So I’m not sure a study from 2024, or the impact on code produced during 2024-2025, can be used to judge current AI coding possibilities.

varjag|27 days ago

I think what we'll see is that, as AI companies collect more usage data, the requirement of knowing what you're doing will sink lower and lower. Whatever advantage we have now is transient.

viraptor|27 days ago

> But you still need to know how to do things properly in general, otherwise the results are bad.

Even that could use some nuance. I'm generating presentations in interactive JS. If they work, they work - that's the result, and I extremely don't care about the details for this use case. Nobody needs to maintain them, nobody cares about the source. There's no need for "properly" in this case.

kilninvar|27 days ago

I've found this is the exact opposite of what I'd dare do with AI: things you don't understand are things you can't verify. Say you want a windowed pane for your cool project, so you ask an AI to draft a design. It looks cool and it works! Until you bring it outside, where after 30 minutes it turns into explosive shrapnel, because the model didn't understand thermal expansion, and neither did you.

Contrast this with something you do know how to make but can't be arsed to: you can keep re-rolling a design until you get something you know and can confirm works. Perfect, time saved.

Quothling|27 days ago

I think AI will fail in any organisation where the business process problems are sometimes discuvered during engineering. I use AI quite a lot; I recently had Claude upgrade one of our old services from HubSpot API v1 to v3 with basically no human interaction beyond the code review. I had to ask it for two changes, I think, but overall I barely had to step out of my regular work to get it done. I did know exactly what to ask of it, because the IT business partners who had discovered the flaw had basically written the tasks already. Anyway, AI worked well there.

Where AI fails us is when we build new software to improve the business related to solar energy production and sale. It fails us because the tasks are never really well defined. Or even if they are, sometimes developers or engineers come up with a better way to do the business process than what was planned for. AI can write the code, but it doesn't refuse to write the code without first being told why it wouldn't be a better idea to do X first. If we only did code reviews, we would miss that step.

In a perfect organisation your BPM people would do this. In the world I live in there are virtually no BPM people, and those who know the processes are too busy to really deal with improving them. Hell... sometimes their processes are changed and they don't realize it until their results are measurably better than they used to be. So I think it depends a lot on the situation. If you've got people breaking up processes, improving them, and then describing each little bit in decent detail, then I think AI will work fine; otherwise it's probably not the best place to go full vibe.

bonesss|27 days ago

> AI can write the code, but it doesn't refuse to write the code without first being told why it wouldn't be a better idea to…

LLMs combine two dangerous traits simultaneously: they are non-critical about suboptimal approaches and they assist unquestioningly. In practice that means doing dumb things a lazy human would refuse because they know better, and then following those rabbit holes until they run out of imaginary dirt.

My estimation is that that combination undermines their productivity potential without very structured application. Considering the excess and escalating costs of dealing with issues as they arise further from the developer's workstation (by factors of approximately 20x, 50x, and 200x+ as you get out through QA and into customer environments, IIRC), you don’t need many screw-ups to make the effort net negative.
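The arithmetic behind that "net negative" claim can be sketched with the comment's rough 20x/50x/200x multipliers (the figures and function names are illustrative, not hard data):

```python
# Sketch of the commenter's point: defects caught later cost multiples
# of a fix at the developer's desk. Multipliers are illustrative.
FIX_AT_DESK_HOURS = 1.0
ESCALATION = {"desk": 1, "qa": 20, "staging": 50, "customer": 200}


def net_hours(hours_saved: float, escaped_defects: dict) -> float:
    """Time saved by the tool minus time spent fixing the defects
    that escaped to each later stage."""
    cost = sum(ESCALATION[stage] * n * FIX_AT_DESK_HOURS
               for stage, n in escaped_defects.items())
    return hours_saved - cost


# Saving 40 hours of typing is wiped out by one defect
# that escapes all the way to a customer environment:
print(net_hours(40, {"customer": 1}))  # -> -160.0
# The same saving survives a defect caught in QA:
print(net_hours(40, {"qa": 1}))        # -> 20.0
```

Under these assumed multipliers, a handful of escaped defects is enough to erase weeks of generation-time savings, which is the comment's point.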

Onavo|27 days ago

> business process problems are sometimes discovered (sic.) during engineering

This deserves a blog post all on its own. OP you should write one and submit it. It's a good counterweight to all the AI optimistic/pessimistic extremism.

ivell|27 days ago

One benefit of AI could be building quick prototypes that let users try out different approaches and discover what processes are needed, before committing to a full high-quality project.

viraptor|27 days ago

> but it doesn't refuse to write the code without first being told why it wouldn't be a better idea to do X first

Then don't ask it to write code? If you ask any recent high quality model to discuss options, tradeoffs, design constraints, refine specs it will do it for you until you're sick and tired of it finding real edge cases and alternatives. Ask for just code and you'll get just code.

bambax|27 days ago

> Unlike their human counterparts, who would escalate a requirements gap to product when necessary, coding assistants are notorious for burying those requirement gaps within hundreds of lines of code

This is the kind of argument that seems true on the surface, but isn't really. An LLM will do what you ask it to do! If you tell it to ask questions, poke holes in your requirements, and not jump to code, it will do exactly that, and usually better than a human.

If you then ask it to refactor some code, identify redundancies, or put this or that functionality into a reusable library, it will also do that.

Those critiques of coding assistants are really critiques of "pure vibe coders" who don't know anything and just try to output yet another useless PDF parsing library before they move on to other things.

voiper1|27 days ago

I hear your pushback, but I think that's his point:

Even seasoned coders using plan mode are funneled towards "get the code out" when experience shows that the final code is a tiny part of the overall picture.

The entire experience should be reorganized so that the code is almost an afterthought, and the requirements, specs, edge cases, tests, etc. are the primary part.

grey-area|27 days ago

It will not, in fact, always do what you ask, because it lacks any understanding, though the chat interface and the prolix nature of LLMs do a good job of hiding that.

sothatsit|27 days ago

It’s like in Anthropic’s own experiment: people who used AI to do their work for them did worse than the control group, but people who used AI to help them understand the problem, brainstorm ideas, and work on their solution did better.

The way you approach using AI matters a lot, and it is a skill that can be learned.

falloutx|27 days ago

It's not just about asking questions, it's about asking the right questions. Can AI push back and decline a completely stupid request? PMs and business people don't really know the limitations of the software and almost always think adding more features is better. With AI you will be shipping 90% of features that were never needed, thus adding to bloat and making the product go off the rails quicker.

robertlagrant|27 days ago

> There’s a name for misalignment between business intent and codebase implementation: technical debt.

I wish we'd stop redefining this term. Technical debt is a shortcut agreed upon with the business to get something out now and fix later, and the fix will cost more than the original. It is entirely in line with business intent.

mpalmer|27 days ago

Exactly. The quote is a great definition of a bug, not debt

williamcotton|27 days ago

Software is not a liability, it's an asset. If you make it for less, then it has a shorter shelf-life. Tech debt is a nonsense term to begin with.

Arch-TK|27 days ago

"Experienced developers were 19% slower when using AI coding assistants—yet believed they were faster (METR, 2025)"

Anecdotally I see this _all the time_...

bonesss|27 days ago

Talking and typing feel far more productive than staring and thinking, and there is a cumulative effect from those breaks to check Reddit while something is generating.

Humans are notoriously bad at estimating time use with different subjective experiences and show excessive weighting of the tail ends of experiences and perceived repetitious tasks. Making something psychologically more comforting and active, particularly if you can activate speech, will distort people’s sense of time meaningfully.

The current hype around LLMs makes me think of misapplied ORMs in medium-scale projects... The tool is chosen early to save hours of boring typing and a certain kind of boring maintenance, but deep into the project what do we see? Over and over, days are spontaneously lost to incidental complexity and arbitrary tool constraints. And with the schedule slipping, it’s too much work to address the root issue, so band-aids get put on band-aids, and we start seeing weeks slip down the drain.

Subjective time accounting and excessive aversion to specific conceptual tasks creates premature optimizations whose effects become omnipresent over time. All the devs in the room agreed they want to avoid some work day 1, but the accounting shows a big time commitment resulting from that immediate desire. Feelings aren’t stopwatches.

[Not hating on ORMs, just misusing tools for weeks to save a couple hours - every day ain’t Saturday - right tool for the job.]

fix4fun|27 days ago

Yes, that's true, because as a developer you have to check whether the "generated" code meets your standards and handles all the edge cases you see.

When you are an experienced developer and you "struggle" to write some code manually, that is an important warning indicator about the project architecture: something is wrong in it.

For such cases I like to step back and think about a redesign/refactor. When coding goes smoothly and some "unpredicted" customer changes can be added easily to the project, that is the best indicator that the architecture is fine.

That's my humble human opinion ;)

faeyanpiraat|27 days ago

This is actually amazing, isn't it? We are just 21% away from becoming faster, then?

Also I don't even care about speed, since I've managed to get soooo much work done which I would not even have wanted to start working on manually.

jpalomaki|27 days ago

The article they are referring to is a 404, but based on the URL it was published a bit more than a year ago. That's quite a long time in a field that is evolving so rapidly and which even the pioneers are still figuring out.

techblueberry|26 days ago

One of the things I’ve noticed talking to Claude is that one of the reasons it seems so genius is its ability to keep right up with me, to talk about things I want to talk about that other people may not, and follow me down rabbit holes.

And having a person who keeps right up with you makes them feel very intelligent, because of course they do: they seem scarily as intelligent as you, because they’re right next to you, maybe even a little ahead! (I think Travis Kalanick was experiencing this when he was talking about vibe physics.)

But the thing is, it was ultimately an extension of your ideas; without your prompts, the ideas don’t exist. It’s very Library of Babel-esque.

And so I wonder if coding assistants have this general problem. If you’re a good developer following good practices, prompting informatively, it’s right next to you.

If you’re not so good and tend to not be able to express yourself clearly or develop solutions that are simple, it’s right there with you.

rcarmo|27 days ago

I think that the premise is wrong (and the title is very clickbaity, but we will ignore that it doesn’t really match the article and the “conclusion”): coding agents are “solving” at least one problem, which is to massively expand the impact of senior developers _that can use them effectively_.

Everything else is just hype and people “holding it wrong”.

iLoveOncall|27 days ago

I really wonder how you people manage to ignore the many research studies that have come out and prove this wrong.

monero-xmr|27 days ago

First you must accept that engineering elegance != market value. Only certain applications and business models need the crème de le crème of engineers.

LLM has been hollowing out the mid and lower end of engineering. But has not eroded highest end. Otherwise all the LLM companies wouldn’t pay for talent, they’d just use their own LLM.

adithyassekhar|27 days ago

It's not just about elegance.

I'm going to give an example of a software with multiple processes.

Humans can imagine scenarios where a process can break. Claude can also do it, but only when the breakage happens from inside the process and you specify it. It cannot identify future issues from a separate process unless you specifically describe that external process, the fact that it could interact with our original process, and the ways in which it can interact.

Identifying these is the skill of a developer. You could say you can document all these cases and let the agent do the coding, but here's the kicker: you only get to know these issues once you start coding them by hand. You go through the variables and function calls and suddenly remember a process elsewhere changes or depends on these values.

Unit tests could catch them in a decently architected system, but those tests need to be defined by the one coding it. And if the architect himself is using AI (because why not), it's doomed from the start.
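A toy illustration of the kind of cross-component test meant here (the scenario and all names are invented): two components share an implicit assumption, and only a test that spans both will notice when one side changes.

```python
# Two components share an implicit limit; a change in one silently
# breaks an assumption in the other. A test spanning both catches it.
class Exporter:
    BATCH_SIZE = 500  # writer emits batches of up to 500 rows

    def export(self, rows: list) -> list:
        return [rows[i:i + self.BATCH_SIZE]
                for i in range(0, len(rows), self.BATCH_SIZE)]


class Importer:
    MAX_BATCH = 500  # reader rejects batches larger than this

    def ingest(self, batch: list) -> int:
        if len(batch) > self.MAX_BATCH:
            raise ValueError("batch too large")
        return len(batch)


def test_export_import_roundtrip() -> None:
    """Fails the moment someone bumps BATCH_SIZE without
    telling whoever owns Importer."""
    rows = list(range(1200))
    total = sum(Importer().ingest(b) for b in Exporter().export(rows))
    assert total == len(rows)


test_export_import_roundtrip()
print("roundtrip ok")
```

A test scoped to either class alone would pass forever; the interaction is exactly what an agent working on one component at a time never sees.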

WD-42|27 days ago

I keep hearing this but I don’t understand. If inelegant code means more bugs that are harder to fix later, that translates into negative business value. You won’t see it right away which is probably where this sentiment is coming from, but it will absolutely catch up to you.

Elegant code isn’t just for looks. It’s code that can still adapt weeks, months, years after it has shipped and created “business value”.

slau|27 days ago

OT: I applaud your correct use of the grave accent, however minor nitpick: crème in French is feminine, therefore it would be “la”.

Madmallard|27 days ago

Based on my experience using Claude opus 4.5, it doesn't really even get functionality correct. It'll get scaffolding stuff right if you tell it exactly what you want but as soon as you tell it to do testing and features it ranges from mediocre to worse than useless.

pmontra|27 days ago

Well, it takes time to assess and adapt, and large organizations need more time than smaller ones. We will see.

In my experience the limiting factor is making the right choices. I've got a customer with the usual backlog of features. There are some very important issues in the backlog that stay in the backlog and are never picked for a sprint. We're doing small bug fixes, but not the big ones. We're doing new features that are partly useless because of the outstanding bugs that prevent customers from fully using them. AI can make us code faster, but nobody is using it to sort issues by importance.

aurareturn|27 days ago

> LLM has been hollowing out the mid and lower end of engineering. But has not eroded highest end. Otherwise all the LLM companies wouldn’t pay for talent, they’d just use their own LLM.

The talent isn't used for writing code anymore, though. They're used for directing, which an LLM isn't very good at, since it has limited real-world experience, interaction with other humans, and goals.

OpenAI has said they're slowing down hiring drastically because their models are making them that much more productive. Codex itself is being built by Codex. Same with Claude Code.

OsamaJaber|27 days ago

The requirements gap point is underrated. AI guesses where a human would ask. By the time you catch it in review, you've already wasted the time you saved -_-

another_twist|26 days ago

Silent, naive compliance. AI will try to follow the prompt almost to a fault, and because AI doesn't talk back, it's up to the operator/engineer to ensure that requirements are well scoped. Product meetings often have a form of hand-waviness about details. The little assumptions that are out of place are what slow down a project the most. The issue is that with AI it is easy to quickly go far down the wrong road. I think that's the reason for the slowdown that people experience.

Ironically, the very agents designed to replace engineers are now making engineers more important. These requirement collection skills can and should be folded into the existing craft of software engineering.

foxes|27 days ago

So basically, "AI" (actually LLMs) are decent at what they are trained on: producing plausible text within a bunch of structure and constraints, and a lot of programming, boring work emails, Reddit/HN comments, etc. can fall into that. It still requires understanding to know when that diverges from something useful; it is still just plausible text, not some magic higher reasoning.

Are they something worth using up vast amounts of power and restructuring all of civilisation around? No

Are they worth giving more power to megacorps over? No

It's like tech doesn't understand consent, plus partially the classic case of "disrupting X": thinking that because you know how to solve something in maths, CS, or physics, you can suddenly solve stuff in a completely different field.

LLMs are over-indexed.

helloplanets|27 days ago

The writeup is a bit contrived, in my opinion, and sort of misrepresents what users can do with tools like Claude Code.

Most coding assistant tools are flexible enough to apply these kinds of workflows, and these sorts of workflows are even brought up in Anthropic's own examples of how to use Claude Code. Any experienced dev knows that the act of writing code is a small part of creating a working program.

znnajdla|26 days ago

It is entirely possible for someone to be 2x faster at coding with AI without increasing their throughput. I believe Claude Code has made me at least 2x as productive. But I can easily see why a 2x increase in individual development speed may not translate to any increase overall at the level of the whole organization. Because at most organizations the bottleneck is not code, it’s everything else from politics to external blockers to bike shedding and people’s egos. So a developer who is 2x faster at their work may just end up having more free time on their hands. The greatest increases in productivity and throughput are where there are no external blockers, like random side projects, which is exactly where people report the greatest productivity with AI.
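This is essentially Amdahl's law applied to the delivery pipeline; a quick sketch with illustrative numbers (the 20% coding fraction is an assumption, not from the comment):

```python
# Amdahl's-law sketch: if coding is only a fraction of total delivery
# time, a 2x coding speedup barely moves overall throughput.
def overall_speedup(coding_fraction: float, coding_speedup: float) -> float:
    """Speedup of the whole pipeline when only the coding part
    gets faster and everything else stays the same."""
    return 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)


# If coding is 20% of delivery time and becomes 2x faster:
print(round(overall_speedup(0.2, 2.0), 2))  # -> 1.11
# Even infinitely fast coding caps out at 1 / 0.8 = 1.25x:
print(round(overall_speedup(0.2, 1e9), 2))  # -> 1.25
```

On a side project, where the coding fraction approaches 1, the same formula gives nearly the full 2x, which matches the comment's observation about where people report the biggest gains.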

zmmmmm|27 days ago

This concept of bottlenecking on code review is definitely a problem.

Either you (a) don't review the code, (b) invest more resources in review or (c) hope that AI assistance in the review process increases efficiency there enough to keep up with code production.

But if none of those work, all AI assistance does is bottleneck the process at review.

ozlikethewizard|27 days ago

Also the thought of my job becoming more code review than anything else is enough to turn me into a carpenter.

ares623|27 days ago

If companies truly believed more code equals more productivity, then they would remove all code review from their process and let ICs ship AI-generated code that they “review” as the prompter, directly to prod.

b1temy|25 days ago

> most tech debt isn’t actually created in the code, it’s created in product meetings. Deadlines. Scope cuts.

> When asked what would help most, two themes dominated

> Reducing ambiguity upstream so engineers aren’t blocked...

I do wonder how much LLMs would help here; this seems, to me at least, to be a uniquely human problem. Humans (managers, leads, owners, what have you) are the ones who interpret requirements and decide deadlines, features, and scope cuts, and they are the ones liable for it.

What could an LLM do to reduce ambiguity upstream? If it was trained on information about the requirements, that same information could be documented somewhere for engineers to refer to. If it were to hallucinate or "guess" an answer without talking to a person for clarification, and that answer turned out to be incorrect, who would be responsible? IMO, the bureaucracy and waiting for clarification mid-implementation is a necessary evil. Clever engineers, through experience, might try to implement things in an open way that can easily accommodate future changes they predict might happen.

As for the second point,

> A clearer picture of affected services and edge cases

> three categories stood out: state machine gaps (unhandled states caused by user interaction sequences), data flow gaps, and downstream service impacts.

I'd agree. Perhaps when a system is complex enough, and a developer is laser focused on a single component of it, it is easy to miss gaps when other parts of the system are used in conjunction with it. I remember a while ago, it used to be a popular take that LLMs were a useful tool for generating unit tests, because of their usual repetitive nature and because LLMs were usually good at finding edge cases to test, some of which a developer might have missed.

---

I will say, it is refreshing to see a take on coding assistants being used for other aspects instead of just writing code, which, as the article pointed out, came with its own set of problems (increased inefficiencies in other parts of the development lifecycle, potential AI-introduced security vulnerabilities, etc.).

fpoling|27 days ago

I have found that using Cursor to write in Rust what I previously would write as a shell or Python or jq script was rather helpful.

The datasets are big and having the scripts written in the performant language to process them saves non-trivial amounts of time, like waiting just 10 minutes versus an hour.

The initial code style in the scripts was rather ugly, with a lot of repeated code. But with enough prompting to reuse code, the generated scripts became sufficiently readable and reasonable that I could quickly check they were indeed doing what was required, and could alter them manually.

But prompting it to do non-trivial changes to existing code base was a time sink. It took too much time to explain/correct the output. And critically the prompts cannot be reused.

Havoc|27 days ago

Same, though I've lately discovered some rough edges in Rust with LLMs. Sticking a working app into a from-scratch container image seems particularly problematic, even if you give it the hint that it needs to link statically.

williamcotton|27 days ago

> Experienced developers were 19% slower when using AI coding assistants—yet believed they were faster

One paper is sure doing a lot of leg work these days...

jfyi|27 days ago

You know, anecdotally...

When I first picked up an agentic coding assistant I was very interested in the process and paid way more attention to it than necessary.

Quickly, I caught myself treating it like a long compilation and getting up to get a coffee and had to self correct this behavior.

I wonder how much novelty of the tech and workflow plays into this number.

newswasboring|27 days ago

Isn't this proposal closely matching with the approach OpenSpec is taking? (Possibly other SDD tool kits, I'm just familiar with this one). I spend way more time in making my spec artifacts (proposal, design, spec, tasks) than I do in code review. During generation of each of these artifacts the code is referenced and surfaces at least some of the issues which are purely architecture based.

28304283409234|27 days ago

I barely use AI as a coding assistant. I use it as a product owner. Works wonders, especially in this age of clueless product owners.

tankenmate|27 days ago

Some of the conclusions remind me of the "ha ha only serious" joke that most people (obviously not the Monks themselves) had about Perl: "write-only code". Maybe some of the lessons learnt about how to maintain Perl code might be applicable in this space?

LorenPechtel|25 days ago

I have long said that the fundamental job of a programmer is to translate sloppy requirements into bulletproof logic.

And AI has no concept of this.

andrewstuart|27 days ago

>>> The jury is out on the effectiveness of AI use in production, and it is not a pretty picture.

Errrrr…. false.

I’ll stop reading right there thanks I think I know what’s coming.

raffkede|27 days ago

A Calculator won't increase your creativity directly but it will free resources that you can allocate to creativity!

averrous|26 days ago

Will software engineering education become like medical education, where we go to school for 6-8 years, with multiple steps of tests and certifications to pass, before we can touch production code?

richardfulop|27 days ago

I always stop reading when I see someone citing that METR study

pseudosavant|27 days ago

Hard to take it seriously when it opens with this note: `48% of AI-generated code contains security vulnerabilities (Apiiro, 2024)`?

Really? 2024? That was forever ago in LLM coding. Before tool calling, reasoning, and larger context windows.

It is like saying YouTube couldn’t exist because too many people were still on dial up.

geldedus|26 days ago

Cope harder. AI-assisted programming is a huge productivity boost.

verdverm|27 days ago

Meh piece; I don't feel like I learned anything from it. Mainly words around old stats in a rapidly evolving field, and then trying to pitch their product.

tl;dr content marketing

There is this super interesting post in new about agent swarms and how the field is evolving towards formal verification like airlines, or at least how there are ideas we can draw on. Anyway, IMO it should be on the front page over this piece:

"Why AI Swarms Cannot Build Architecture"

An analysis of the structural limitations preventing AI agent swarms from producing coherent software architecture

https://news.ycombinator.com/item?id=46866184

locknitpicker|27 days ago

> meh piece, don't feel like I learned anything from it.

That's fine. I found the leading stats interesting. If coding assistants slowed down experienced developers while creating a false sense of development speed, then that should be thought-provoking. Also, nearly half of the code churned out by coding assistants having security issues: that one's tough.

Perhaps it's just me, but that's in line with my personal experience, and I rarely see those points being raised.

> There is this super interesting post in new about agent swarms and how (...)

That's fine. Feel free to submit the link. I find it far more interesting to discuss the post-rose-tinted-glasses view of coding agents. I don't think it makes any sense at all to laud promises of formal verification when the same technology right now is unable to avoid introducing security vulnerabilities.

zkmon|27 days ago

Wondering why this is on the front page? There is hardly any new insight, other than a few minutes of exposure to a greenish glow that makes everything look brownish after you close the page.

heeton|27 days ago

I upvoted because I’m very keen for more teams to start trying to solve this problem and release tools and products to help.

Context gathering and refinement is the biggest issue I have with product development at the moment.