This whole Copilot vs HUD debate instantly brought to mind a classic Japanese anime from 1991 called Future GPX Cyber Formula (https://en.wikipedia.org/wiki/Future_GPX_Cyber_Formula). Yeah, it’s a racing anime set in the then-distant future of 2015, where cars come with full-on intelligent AIs.
The main character’s car, Asurada, is basically a "Copilot" in every sense. It was designed by his dad to be more than just a tool, more like a partner that learns, adapts, and grows with the driver. Think emotional support plus tactical analysis with a synthetic voice.
Later in the series, his rival shows up driving a car that feels very much like a HUD concept. It's all about cold data, raw feedback, and zero bonding. Total opposite philosophy.
What’s wild is how accurately it captures the trade-offs we’re still talking about in 2025. If you’re into human-AI interaction or just want to see some shockingly ahead-of-its-time design thinking wrapped in early '90s cyber aesthetics, it’s absolutely worth a watch.
I'm very curious whether a toggle would be useful that displays a heatmap of a source file, showing how surprising each token is to the model. Red tokens are more likely to be errors, bad names, or wrong comments.
Turns out this kind of UI is not only useful for spotting bugs, but also lets users discover implementation choices and design decisions that are obscured by traditional assistant interfaces. Very exciting research direction!
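A minimal sketch of such a surprisal heatmap. A real version would pull per-token log-probs from the LLM itself; here a Laplace-smoothed unigram model stands in as a toy scorer, and the `warm`/`hot` thresholds are arbitrary values tuned for the toy, not anything principled:

```python
import math
from collections import Counter

def make_unigram_logprob(corpus_tokens):
    """Toy stand-in for an LLM's log P(token | context): a Laplace-smoothed
    unigram model fit on a small corpus."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 bucket for unseen tokens
    def logprob(token, _context):
        return math.log((counts[token] + 1) / (total + vocab))
    return logprob

def surprisal_heatmap(tokens, logprob_fn, warm=2.5, hot=3.5):
    """Colour each token by its surprisal in bits: -log2 P(token | context)."""
    out = []
    for i, tok in enumerate(tokens):
        bits = -logprob_fn(tok, tokens[:i]) / math.log(2)
        colour = "red" if bits >= hot else "yellow" if bits >= warm else "green"
        out.append((tok, round(bits, 2), colour))
    return out

lp = make_unigram_logprob("the cat sat on the mat the cat".split())
heat = surprisal_heatmap("the cat sat on the xylophone".split(), lp)
```

With this toy scorer, the never-before-seen "xylophone" lights up red while common tokens stay green, which is exactly the "pay attention here" signal the toggle would give.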
This! That's what I wanted since LLMs learned how to code.
And in fact, I think I saw a paper / blog post that showed exactly this, and then... nothing. For the last few years, the tech world has gone crazy over code generation, with forks of VSCode hooked to LLMs worth billions of dollars and all that. But AI-based code analysis is remarkably poor. The only thing I have seen resembling this is bug report generators, which I believe is one of the worst approaches.
The idea you have, which I also had and which I am sure many thousands of other people have had, seems so obvious. Why is no one talking about it? Is there something wrong with it?
The thing is, using such a feature requires a brain between the keyboard and the chair. A "surprising" token can mean many things: a bug, but also a unique feature; either way, something you should pay attention to. Too much "green" should also be seen as a signal. Maybe you reinvented the wheel and should use a library instead, or maybe you failed to take into account a use case specific to your application.
Maybe such tools don't make good marketing. You need to be a competent programmer to use them. It won't help you write more lines faster. It doesn't fit the fantasy of making anyone into a programmer with no effort (hint: learning a programming language is not the hard part). It doesn't generate the busywork of AI 1 introducing bugs for AI 2 to create tickets for.
Love the idea & spitballing ways to generalize to coding..
Thought experiment: as you write code, an LLM generates tests for it & the IDE runs those tests as you type, showing which ones are passing & failing, updating in real time. Imagine 10-100 tests that take <1ms to run, being rerun with every keystroke, and the result being shown in a non-intrusive way.
The tests could appear in a separated panel next to your code, and pass/fail status in the gutter of that panel. As simple as red and green dots for tests that passed or failed in the last run.
The presence or absence and content of certain tests, plus their pass/fail state, tell you what the code you’re writing does from an outside perspective. Not seeing the LLM write a test you think you’ll need? Either your test generator prompt is wrong, or the code you’re writing doesn’t do what you think it does!
Making it realtime helps you shape the code.
Or if you want to do traditional TDD, the tooling could be reversed so you write the tests and the LLM makes them pass as soon as you stop typing by writing the code.
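The core loop of this thought experiment is small enough to sketch. Here the LLM-generated tests are stood in by hand-written lambdas (`expect` is a hypothetical helper so assertions can live in lambdas), and the "gutter" is just a text rendering of the red/green dots:

```python
def expect(cond):
    # minimal assertion helper so tests can live in lambdas
    if not cond:
        raise AssertionError(cond)

def add(a, b):
    return a - b  # bug: mid-edit typo, should be a + b

# Stand-ins for LLM-generated tests; a real tool would regenerate these as you type.
tests = [
    ("adds", lambda: expect(add(2, 3) == 5)),
    ("zero_identity", lambda: expect(add(7, 0) == 7)),
]

def run_tests(tests):
    """Run (name, fn) pairs; a test passes iff it raises nothing."""
    results = {}
    for name, fn in tests:
        try:
            fn()
            results[name] = True
        except Exception:
            results[name] = False
    return results

def gutter(results):
    """One red/green dot per test, like the panel gutter described above."""
    return " ".join(("o " if ok else "x ") + name for name, ok in results.items())
```

Running this on the buggy `add` shows a mixed gutter ("x adds o zero_identity"): the failing dot tells you the subtraction typo breaks addition while the zero case still happens to pass.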
Humans writing the test first and LLM writing the code is much better than the reverse. And that is because tests are simply the “truth” and “intention” of the code as a contract.
When you give up the work of deciding what the expected inputs and outputs of the code/program are, you are no longer in the driver's seat.
There's no way this would work for any serious C++ codebase. Compile times alone make this impossible.
I'm also not sure how an LLM could guess what the tests should be without having written all of the code. Imagine, for example, writing code for a new data structure.
Then do you need tests to validate that your tests are correct? Otherwise the LLM might just generate passing code even if the test is bad, or write code that games the system, because it's easier to hardcode an output value than to do the actual work.
There probably is a setup where this works well, but the LLM and humans need to be able to move across the respective boundaries fluidly...
Writing clear requirements and letting the AI take care of the bulk of both sides seems more streamlined and productive.
> Thought experiment: as you write code, an LLM generates tests for it & the IDE runs those tests as you type, showing which ones are passing & failing, updating in real time. Imagine 10-100 tests that take <1ms to run, being rerun with every keystroke, and the result being shown in a non-intrusive way.
I think this is a bad approach. Tests enforce invariants, and they are exactly the type of code we don't want LLMs to touch willy-nilly.
You want your tests to only change if you explicitly want them to, and even then only the tests should change.
Once you adopt that constraint, you'll quickly realize every single detail of your thought experiment is already a mundane workflow in any developer's day-to-day activities.
Consider the fact that watch mode is a staple of any JavaScript testing framework, and those even found their way into .NET a couple of years ago.
So, your thought experiment is something professional software developers have been doing for what? A decade now?
> Imagine 10-100 tests that take <1ms to run, being rerun with every keystroke, and the result being shown in a non-intrusive way.
Even if this were possible, this seems like an absolutely colossal waste of energy - both the computer's, and my own. Why would I want incomplete tests generated after every keystroke? Why would I test an incomplete if statement or some such?
> Imagine 10-100 tests that take <1ms to run, being rerun with every keystroke, and the result being shown in a non-intrusive way.
Doesn’t seem like high ROI to run full suite of tests on each keystroke. Most keystrokes yield an incomplete program, so you want to be smarter about when you run the tests to get a reasonably good trade off.
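One cheap way to get that trade-off: gate the rerun on whether the buffer even parses. This toy sketch uses Python's `ast` module as the completeness check; other languages would need their own parser:

```python
import ast

def worth_testing(source: str) -> bool:
    """Only fire the test suite when the buffer parses as a complete
    program, so half-typed if statements don't trigger anything."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False
```

So `worth_testing("x = 1")` says go, while a half-typed `"if x:"` or `"def f("` suppresses the run entirely.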
That's already part of most IDEs, and they know which tests to re-run because of coverage, so it's really fast.
It also updates the coverage on the fly; you don't even have to look at the test output to know that you've broken something, since the tests are no longer reaching your lines.
https://gavindraper.com/2020/05/27/VS-Code-Continious-Testin...
https://wallabyjs.com/
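The coverage-based selection these tools do reduces to a set intersection. A toy sketch, where the coverage map (test name to the source lines it executed on the last run) is assumed to come from the test runner's coverage instrumentation:

```python
def affected_tests(coverage_map, changed_lines):
    """coverage_map: test name -> set of source line numbers it executed
    on the last run. Re-run only tests whose coverage touches the edit."""
    return sorted(name for name, lines in coverage_map.items()
                  if lines & changed_lines)

cov = {"test_parse": {10, 11, 12}, "test_render": {40, 41}}
```

Editing line 11 re-runs only `test_parse`; editing an uncovered line re-runs nothing, which is itself the "your tests aren't reaching this" signal.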
Yes the reverse makes much more sense to me. AI help to spec out the software & then the code has an accepted definition of correctness. People focus on this way less than they should I think
Besides generating the tests, automatically running tests on edit and showing the results inline is already a thing. I think it'd be better to do it the other way around, start with the tests and let the LLM implement it until all tests are green. Test driven development.
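The tests-first direction can be sketched as a loop where the human-written tests are frozen and only implementations churn. Here the successive LLM attempts are stood in by a plain list of candidate functions:

```python
def tdd_loop(tests, candidates):
    """Human-written tests stay fixed; candidates stand in for successive
    LLM attempts. Return the first implementation that goes green."""
    for impl in candidates:
        try:
            if all(test(impl) for test in tests):
                return impl
        except Exception:
            continue  # a crashing candidate counts as red
    return None

# The contract, written by the human:
tests = [lambda f: f(2, 3) == 5, lambda f: f(0, 0) == 0]
# Stand-ins for LLM attempts: first one is wrong, second is right.
candidates = [lambda a, b: a - b, lambda a, b: a + b]
impl = tdd_loop(tests, candidates)
```

The key property is that the loop can only ever touch `candidates`, never `tests`, so the failure mode of quietly disabling a test is structurally ruled out.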
Absolutely agree, and spellchecker is a great analogy.
I've recently been snoozing co-pilot for hours at a time in VS Code because it’s adding a ton of latency to my keystrokes. Instead, it turns out that `rust_analyzer` is actually all that I need. Go-to definition and hover-over give me exactly what the article describes: extra senses.
Rust is straightforward, but the tricky part may be figuring out what additional “senses” are helpful in each domain. In that way, it seems like adding value with AI comes full circle to being a software design problem.
ChatGPT and Claude are great as assistants for strategizing about problems, but even the typeahead value seems negligible to me in a large enough project. My experience with them as "coding agents" is generally that they fail miserably or regurgitate some existing code base for a well-known problem. But they are great at helping configure things, and as teachers (in the Socratic sense) to help you get up to speed with some technical issue.
The heads-up display is the thesis for Tritium[1], going back to its founding. Lawyers' time and attention (like fighter pilots') is critical but they're still required in the cockpit. And there's some argument they always will be.
[1] https://news.ycombinator.com/item?id=44256765 ("an all-in-one drafting cockpit")
On the topic of Rust IDE plugins that give you more senses, take a look at Flowistry: https://github.com/willcrichton/flowistry . It's not AI, it's using information flow analysis.
AI building complex visualisations for you on-the-fly seems like a great use-case.
For example, if you are debugging memory leaks in a specific code path, you could get AI to write a visualisation of all the memory allocations and frees under that code path to help you identify the problem. This opens up an interesting new direction where building visualisations to debug specific problems is probably becoming viable.
This idea reminds me of Jonathan Blow's recent talk at LambdaConf. In it, he shows a tool he made to visualise his programs in different ways to help with identifying potential problems. I could imagine AI being good at building these. The talk: https://youtu.be/IdpD5QIVOKQ?si=roTcCcHHMqCPzqSh&t=1108
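The allocation-tracking part of that debugging scenario is simple enough to sketch. A toy tracker counts live allocations per call site; `leaks()` is the table an AI-generated visualisation would then plot (the site names here are made up for illustration):

```python
from collections import Counter

class AllocTracker:
    """Count live allocations per call site; leaks() is the data a
    generated visualisation would render."""
    def __init__(self):
        self.live = Counter()

    def alloc(self, site):
        self.live[site] += 1

    def free(self, site):
        self.live[site] -= 1

    def leaks(self):
        return {site: n for site, n in self.live.items() if n > 0}

t = AllocTracker()
t.alloc("parse_header"); t.alloc("parse_header"); t.free("parse_header")
t.alloc("read_body"); t.free("read_body")
```

After this run, only `parse_header` shows a live allocation, which is the imbalance a per-code-path visualisation would make jump out.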
There’s a lot of ideation for coding HUDs in the comments, but ironically I think the core feature of most coding copilots is already best described as a HUD: tab completion.
And interestingly, that is indeed the feature I find most compelling from Cursor. I particularly love when I’m doing a small refactor, like changing a naming convention for a few variables, and after I make the first edit manually Cursor will jump in with tab suggestions for the rest.
To me, that fully encapsulates the definition of a HUD. It’s a delightful experience, and it’s also why I think anyone who pushes the exclusively-copilot oriented Claude Code as a superior replacement is just wrong.
I've spent the last few months using Claude Code and Cursor - experimenting with both. For simple tasks, both are pretty good (like identifying a bug given console output) - but when it comes to making a big change, like adding a brand new feature to existing code that requires changes to lots of files, writing tests, etc - it often will make at least a few mistakes I catch on review, and then prompting the model to fix those mistakes often causes it to fix things in strange ways.
A few days ago, I had a bug I just couldn't figure out. I prompted Claude to diagnose and fix the issue - but after 5 minutes or so of it trying out different ideas, rerunning the test, and getting stuck just like I did - it just turned off the test and called it complete. If I wasn't watching what it was doing, I could have missed that it did that and deployed bad code.
The last week or so, I've totally switched from relying on prompting to just writing the code myself and using tab complete to autocomplete like 80% of it. It is slower, but I have more control and honestly, it's much more enjoyable of an experience.
I'd love to have something that operates more at the codebase level.
Autocomplete is very local.
(Maybe "tab completion" when setting up a new package in a monorepo? Or make architectural patterns consistent across a whole project? Highlight areas in the codebase where the tests are weak? Or collect on the fly a full view of a path from FE to BE to DB?)
Doesn't it all come down to "what is the ideal interface for humans to deal with digital information"?
We're getting more and more information thrown at us each day, and the AIs are adding to that, not reducing it. The ability to summarise dense and specialist information (I'm thinking error logs, but could be anything really) just means more ways for people to access and view that information who previously wouldn't.
How do we, as individuals, best deal with all this information efficiently? Currently we have a variety of interfaces: websites, dashboards, emails, chat. Are they all still necessary? They might be now, but what about the next 10 years? Do I even need to visit a company's website if I can get the same information from some single chat interface?
The fact we have AIs building us websites, apps, and web UIs just seems so... redundant.
Websites were a way to get authoritative information about a company, from that company (or another trusted source like Wikipedia). That trust is powerful, which is why we collectively spent so much time trying to educate users about the "line of death" in browsers, drawing padlock icons, chasing down impersonator sites, mitigating homoglyph attacks, etc. This all rested on the assumption that certain sites were authoritative sources of information worth seeking out.
I'm not really sure what trust means in a world where everyone relies uncritically on LLM output. Even if the information from the LLM is usually accurate, can I rely on that in some particularly important instance?
The designers of 6th gen fighter jets are confronting the same challenge. The cockpit, which is an interface between the pilot and the airframe, will be optionally manned. If the cockpit is manned, the pilot will take on a reduced set of roles focused on higher-level decision making.
By the 7th generation it's hard to see how humans will still be value-add, unless it's for international law reasons to keep a human in the loop before executing the kill chain, or to reduce Skynet-like tail risks in line with Paul Christiano's arms race doom scenario.
Perhaps interfaces in every domain will evolve this way. The interface will shrink in complexity, until it's only humans describing what they want to the system, at higher and higher levels of abstraction. That doesn't necessarily have to be an English-language interface if precision in specification is required.
Computers / the web are a fast route to information, but they are also a storehouse of information; a ledger. This is the old "information wants to be free, and also very expensive." I don't want all the info on my PC, or the bank database, to be 'alive', I want it to be frozen in kryptonite, so it's right where I left it when I come back.
I think we're slowly allowing AI access to the interface layer, but not to the information layer, and hopefully we'll figure out how to keep it that way.
I think one key reason HUDs haven’t taken off more broadly is the fundamental limitation of our current display medium - computer screens and mobile devices are terrible at providing ambient, peripheral information without being intrusive.
When I launch an AI agent to fix a bug or handle a complex task, there’s this awkward wait time: it takes too long for me to sit there staring at the screen waiting for output, but it’s too short for me to disengage and do something else meaningful. A HUD approach would give me a much shorter feedback loop. I could see what the AI is doing in my peripheral vision and decide moment-to-moment whether to jump in and take over the coding myself, or let the agent continue while I work on something else. Instead of being locked into either “full attention on the agent” or “completely disengaged,” I’d have that ambient awareness that lets me dynamically choose my level of involvement.
This makes me think VR/AR could be the killer application for AI HUDs. Spatial computing gives us a display paradigm where AI assistance can be truly ambient rather than demanding your full visual attention on a 2D screen. I picture this being especially helpful for physical tasks, such as cooking or fixing a bike.
You just described what I do with my ultrawide monitor and laptop screen.
I can be fully immersed in a game or anything and keep Claude in a corner of a tmux window next to a browser on the other monitor and jump in whenever I see it get to the next step or whatever.
The only real life usage of any kind of HUD I can imagine at the moment is navigation, and I have only ever used that (or other car related things) as something I selectively look at, never felt like it's something I need to have in sight at all times.
That said, the best GUI is the one you don't notice, so uh... I can't actually name anything else, it's probably deeply engrained in my computer usage.
About a decade back Bret Victor [1] talked about how his principle in life is to reduce the delay in feedback, and having faster iteration cycles not just helps in doing things (coding) better but also contributes to new creative insights. He had a bunch of examples built to showcase alternative ways of coding, which is very close to being HUDs - one example shown in the OP is very similar to the one he presents to "step through time to figure out the working of the code".
A HUD is an even more "confident" display of data than text though. What do you do with a HUD that hallucinates? Is there a button on each element that shows you sources?
Night-vision optics come to mind: prone to noise and visual artifacts, and especially strange under certain edge conditions. Some of their specs tend to be strictly inferior to a Mark I Eyeball—narrow FOV, limited focusing power, whatever else.
But an operator learns to intuit which aspects to trust and which to double-check. The fact that it’s an “extra sense” can outweigh the fact that it’s not a perfect source of truth, no? Trust the tech where it proves useful to you, and find ways to compensate (or outright don’t use it) where it’s not.
I could imagine a system in which the model only chooses which data points to show at what time, but the actual passing is still handled by good old deterministic programming.
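That split can be made concrete in a few lines. In this sketch the model's entire output is a list of key names; values and formatting stay in deterministic code, so a hallucinated key simply renders nothing (the metric names are invented for illustration):

```python
def render_panel(data, chosen_keys):
    """The model only chooses *which* keys to surface. Values come from
    deterministic code, so nothing shown can be hallucinated."""
    return [f"{k}: {data[k]}" for k in chosen_keys if k in data]

# Ground truth gathered by ordinary instrumentation:
data = {"rss_mb": 512, "open_fds": 31, "uptime_s": 9000}
```

If the model asks for `["rss_mb", "bogus_metric"]`, the panel shows only `rss_mb: 512`; the made-up key is silently dropped rather than displayed with an invented value.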
We also need what goes along with HUDs: switches, knobs, dials. Actual controls.
Although we are talking about HUDs, I'm not really talking about UI widgets getting the good old skeuomorphism or better buttons. In the cockpit the pilot doesn't have his controls on a touch screen; he has an array of buttons and dials and switches all around him. It's these controls that are used in response to what the pilot sees on the HUD, and it's these controls that change the aircraft according to the pilot's will, which in turn changes what the HUD shows.
Actually defining those situations and collecting the data (which should help identify those situations) are the hard parts. Having an autonomous system that does it has been solved for ages.
> anyone serious about designing for AI should consider non-copilot form factors that more directly extend the human mind.
Aren't auto-completes doing exactly this? It's not a co-pilot in the sense of a virtual human, but already more in the direction of a HUD.
Sure you can converse with LLMs but you can also clearly just send orders and they eagerly follow and auto-complete.
I think what the author might be trying to express, in a quirky fashion, is that AI should work alongside us, looking in the same direction as we are, not sitting opposite us at the table, staring at us and arguing. We'll have true AI when it does our bidding without any interaction from us.
Author here. Yes, I think the original GitHub Copilot autocomplete UI is (ironically) a good example of a HUD! Tab autocomplete just becomes part of your mental flow.
Recent coding interfaces are all trending towards chat agents though.
It’s interesting to consider what a “tab autocomplete” UI for coding might look like at a higher level of abstraction, letting you mold code in a direct-feeling way without being bogged down in details.
The current paradigm is driven by two factors. One is the reliability of the models, which constrains how much autonomy you can give to an agent. The second is chat as a medium, which everyone went to because ChatGPT became a thing.
I see the value in HUDs, but only when you can be sure output is correct. If that number is only 80% or so, copilots work better so that humans in the loop can review and course correct - the pair programmer/worker. This is not to say we need ai to get to higher levels of correctness inherently, just that systems deployed need to do so before they display some information on HUD.
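That routing policy is essentially a confidence gate, sketched here with an arbitrary example threshold (0.95 is illustrative, not a recommendation):

```python
def hud_or_review(item, confidence, threshold=0.95):
    """Surface an item on the HUD only above the confidence threshold;
    anything shakier is queued for explicit copilot-style review."""
    return ("hud", item) if confidence >= threshold else ("review", item)
```

So a finding the system is 99% sure of lands on the HUD, while the 80%-confident output the comment worries about goes through the human-in-the-loop path instead.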
This is missing the addictive/engaging part of a conversational interface for most people out there. Which is in line with the criticisms highlighted in the fine article.
Just because most people are fond of it doesn't actually mean it improves their life, goals and productivity.
I think the challenge is primarily the context and intent.
The spellchecker knows my context easily, and there is a setting to choose from (American English, British English, etc.), as well as the paragraphs I'm writing. The intent is easy to recognise. While in a codebase, the context is longer and vaguer, the assistant would hardly know why I'm changing a function and how that impacts the rest of the codebase.
However, as the article mentions, it may not be a universal solution, but it's a perspective to consider when designing AI systems.
This is how ship's AI is depicted in The Expanse (TV series) and I think it's really compelling. Quiet and unobtrusive, but Alex can ask the Rocinante to plot a new course or display the tactical situation and it's fast, effective and effortlessly superhuman with no back-talk or unnecessary personality.
Compare another sci-fi depiction taken to the opposite extreme: Sirius Cybernetics products in the Hitchhikers Guide books. "Thank you for making a simple door very happy!"
I may remember wrongly, but I don't believe The Expanse was depicting AI. It was more powerful computation and unobtrusive interfaces. There was nothing like Jarvis. The Rocinante was a war vessel and all its features were tailored to that. I believe even the mechanical suits of the Martians were very manual (no Cortana a la Master Chief).
On this topic, can anyone find a document I saw on HN but can no longer locate?
A historical computing essay, it was presented in a plaintext (monospaced) text page. It outlined a computer assistant and how it should feel to use. The author believed it should be unobtrusive, something that pops into awareness when needed and then gets back out of the way. I don't believe any of the references in TFA are what it was.
Great post! I've been thinking along similar lines about human-AI interfaces beyond the copilot paradigm. I see two major patterns emerging:
Orchestration platforms - Evolution of tools like n8n/Make into cybernetic process design systems where each node is an intelligent agent with its own optimization criteria. The key insight: treat processes as processes; don't anthropomorphize LLMs as humans. Build walls around probabilistic systems to ensure deterministic outcomes where needed. This solves massive "communication problems."
Oracle systems - AI that holds entire organizations in working memory, understanding temporal context and extracting implicit knowledge from all communications. Not just storage but active synthesis. Imagine AI digesting every email/doc/meeting to build a living organizational consciousness that identifies patterns humans miss and generates strategic insights.
We have explored that sort of debugging/visualization tool in https://pernos.co. We built it before the age of genAI, but I think for coming up with powerful visualizations AI is neither necessary nor (yet) sufficient.
[+] [-] kevinphy|7 months ago|reply
The main character’s car, Asurada, is basically a "Copilot" in every sense. It was designed by his dad to be more than just a tool, more like a partner that learns, adapts, and grows with the driver. Think emotional support plus tactical analysis with a synthetic voice.
Later in the series, his rival shows up driving a car that feels very much like a HUD concept. It's all about cold data, raw feedback, and zero bonding. Total opposite philosophy.
What’s wild is how accurately it captures the trade-offs we’re still talking about in 2025. If you’re into human-AI interaction or just want to see some shockingly ahead-of-its-time design thinking wrapped in early '90s cyber aesthetics, it’s absolutely worth a watch.
[+] [-] furyofantares|7 months ago|reply
[+] [-] teoremma|7 months ago|reply
Turns out this kind of UI is not only useful to spot bugs, but also allows users to discover implementation choices and design decisions that are obscured by traditional assistant interfaces.
Very exciting research direction!
[+] [-] GuB-42|7 months ago|reply
And in fact, I think I saw a paper / blog post that showed exactly this, and then... nothing. For the last few years, the tech world became crazy with code generation, with forks of VSCode hooked to LLMs worth billions of dollars and all that. But AI-based code analysis is remarkably poor. The only thing I have seen resembling this is bug report generators, which is I believe is one of the worst approach.
The idea you have, that I also had and I am sure many thousands of other people had seem so obvious, why is no one talking about it? Is there something wrong with it?
The thing is, using such a feature requires a brain between the keyboard and the chair. A "surprising" token can mean many things: a bug, but also a unique feature, anyways, something you should pay attention to. Too much "green" should also be seen as a signal. Maybe you reinvented the wheel and you should use a library instead, or maybe you failed to take into account a use case specific to your application.
Maybe such tools don't make good marketing. You need to be a competent programmer to use them. It won't help you write more lines faster. It doesn't fit the fantasy of making anyone into a programmer with no effort (hint: learning a programming language is not the hard part). It doesn't generate the busywork of AI 1 introducing bugs for AI 2 to create tickets for.
[+] [-] cadamsdotcom|7 months ago|reply
Thought experiment: as you write code, an LLM generates tests for it & the IDE runs those tests as you type, showing which ones are passing & failing, updating in real time. Imagine 10-100 tests that take <1ms to run, being rerun with every keystroke, and the result being shown in a non-intrusive way.
The tests could appear in a separated panel next to your code, and pass/fail status in the gutter of that panel. As simple as red and green dots for tests that passed or failed in the last run.
The presence or absence and content of certain tests, plus their pass/fail state, tells you what the code you’re writing does from an outside perspective. Not seeing the LLM write a test you think you’ll need? Either your test generator prompt is wrong, or the code you’re writing doesn’t do the things you think they do!
Making it realtime helps you shape the code.
Or if you want to do traditional TDD, the tooling could be reversed so you write the tests and the LLM makes them pass as soon as you stop typing by writing the code.
[+] [-] callc|7 months ago|reply
When you give up the work of deciding what the expected inputs and outputs of the code/program is you are no longer in the drivers seat.
[+] [-] William_BB|7 months ago|reply
I'm also not sure how LLM could guess what the tests should be without having written all of the code, e.g. imagine writing code for a new data structure
[+] [-] cjonas|7 months ago|reply
There probably is a setup where this works well, but the LLM and humans need to be able to move across the respective boundaries fluidly...
Writing clear requirements and letting the AI take care of the bulk of both sides seems more streamlined and productive.
[+] [-] motorest|7 months ago|reply
I think this is a bad approach. Tests enforce invariants, and they are exactly the type of code we don't want LLMs to touch willy-nilly.
You want your tests to only change if you explicitly want them to, and even then only the tests should change.
Once you adopt that constraint, you'll quickly realize ever single detail of your thought experiment is already a mundane workflow in any developer's day-to-day activities.
Consider the fact that watch mode is a staple of any JavaScript testing framework, and those even found their way into .NET a couple of years ago.
So, your thought experiment is something professional software developers have been doing for what? A decade now?
[+] [-] squigz|7 months ago|reply
Even if this were possible, this seems like an absolutely colossal waste of energy - both the computer's, and my own. Why would I want incomplete tests generated after every keystroke? Why would I test an incomplete if statement or some such?
[+] [-] andsoitis|7 months ago|reply
Doesn’t seem like high ROI to run full suite of tests on each keystroke. Most keystrokes yield an incomplete program, so you want to be smarter about when you run the tests to get a reasonably good trade off.
[+] [-] federiconafria|7 months ago|reply
It also updates the coverage on the fly, you don't even have to look at the test output to know that you've broken something since the tests are not reaching your lines.
https://gavindraper.com/2020/05/27/VS-Code-Continious-Testin...
[+] [-] scottgg|7 months ago|reply
https://wallabyjs.com/
[+] [-] hnthrowaway121|7 months ago|reply
[+] [-] Cthulhu_|7 months ago|reply
[+] [-] piker|7 months ago|reply
I've recently been snoozing co-pilot for hours at a time in VS Code because it’s adding a ton of latency to my keystrokes. Instead, it turns out that `rust_analyzer` is actually all that I need. Go-to definition and hover-over give me exactly what the article describes: extra senses.
Rust is straightforward, but the tricky part may be figuring out what additional “senses” are helpful in each domain. In that way, it seems like adding value with AI comes full circle to being a software design problem.
ChatGPT and Claude are great as assistants for strategizing problems, but even the typeahead value seems to me negligible in a large enough project. My experience with them as "coding agents" is generally that they fail miserably or are regurgitating some existing code base on a well known problem. But they are great at helping config things and as teachers in (the Socratic sense) to help you get up-to-speed with some technical issue.
The heads-up display is the thesis for Tritium[1], going back to its founding. Lawyers' time and attention (like fighter pilots') is critical but they're still required in the cockpit. And there's some argument they always will be.
[1] https://news.ycombinator.com/item?id=44256765 ("an all-in-one drafting cockpit")
[+] [-] kibwen|7 months ago|reply
[+] [-] sothatsit|7 months ago|reply
For example, if you are debugging memory leaks in a specific code path, you could get AI to write a visualisation of all the memory allocations and frees under that code path to help you identify the problem. This opens up an interesting new direction where building visualisations to debug specific problems is probably becoming viable.
This idea reminds me of Jonathan Blow's recent talk at LambdaConf. In it, he shows a tool he made to visualise his programs in different ways to help with identifying potential problems. I could imagine AI being good at building these. The talk: https://youtu.be/IdpD5QIVOKQ?si=roTcCcHHMqCPzqSh&t=1108
[+] [-] _jab|7 months ago|reply
And interestingly, that is indeed the feature I find most compelling from Cursor. I particularly love when I’m doing a small refactor, like changing a naming convention for a few variables, and after I make the first edit manually Cursor will jump in with tab suggestions for the rest.
To me, that fully encapsulates the definition of a HUD. It’s a delightful experience, and it’s also why I think anyone who pushes the exclusively-copilot oriented Claude Code as a superior replacement is just wrong.
[+] [-] cleverwebble|7 months ago|reply
I've spent the last few months experimenting with both Claude Code and Cursor. For simple tasks, both are pretty good (like identifying a bug given console output) - but when it comes to making a big change, like adding a brand-new feature to existing code that requires changes to lots of files, writing tests, etc., it will often make at least a few mistakes that I catch on review, and prompting the model to fix those mistakes often causes it to fix things in strange ways.
A few days ago, I had a bug I just couldn't figure out. I prompted Claude to diagnose and fix the issue - but after 5 minutes or so of it trying out different ideas, rerunning the test, and getting stuck just like I did - it just turned off the test and called it complete. If I wasn't watching what it was doing, I could have missed that it did that and deployed bad code.
The last week or so, I've totally switched from relying on prompting to just writing the code myself and using tab complete to autocomplete like 80% of it. It is slower, but I have more control and, honestly, it's a much more enjoyable experience.
[+] [-] Garlef|7 months ago|reply
I'd love to have something that operates more at the codebase level. Autocomplete is very local.
(Maybe "tab completion" when setting up a new package in a monorepo? Or make architectural patterns consistent across a whole project? Highlight areas in the codebase where the tests are weak? Or collect on the fly a full view of a path from FE to BE to DB?)
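One of those codebase-level checks (highlighting modules with weak or missing tests) is easy to sketch without any AI at all. The `src/` and `tests/test_<name>.py` layout here is an assumption about project conventions:

```python
import tempfile
from pathlib import Path

def modules_without_tests(root):
    """Flag source modules under src/ lacking a matching tests/test_<name>.py."""
    root = Path(root)
    sources = {p.stem for p in (root / "src").glob("*.py")}
    tested = {p.stem.removeprefix("test_") for p in (root / "tests").glob("test_*.py")}
    return sorted(sources - tested)

# Demo on a throwaway project layout.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "src").mkdir()
    (root / "tests").mkdir()
    (root / "src" / "billing.py").touch()
    (root / "src" / "auth.py").touch()
    (root / "tests" / "test_auth.py").touch()
    untested = modules_without_tests(root)

print(untested)  # → ['billing']
```

The HUD version of this would run continuously and shade untested regions in the editor, rather than producing a report you have to go ask for.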
[+] [-] hi_hi|7 months ago|reply
We're getting more and more information thrown at us each day, and the AIs are adding to that, not reducing it. The ability to summarise dense and specialist information (I'm thinking error logs, but it could be anything really) just means more people can access and view information they previously couldn't.
How do we, as individuals, best deal with all this information efficiently? Currently we have a variety of interfaces: websites, dashboards, emails, chat. Are all these necessary anymore? They might be now, but what about in the next 10 years? Do I even need to visit a company's website if I can get the same information from some single chat interface?
The fact we have AIs building us websites, apps, and web UIs just seems so... redundant.
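Part of that error-log summarisation is mechanical, no LLM required. A sketch that collapses a log into templates and counts, so the rare lines stand out (the log format here is made up):

```python
import re
from collections import Counter

def summarise(log_lines):
    """Collapse log lines into templates by masking numbers and hex ids,
    then count occurrences so rare (often interesting) errors stand out."""
    def template(line):
        line = re.sub(r"0x[0-9a-fA-F]+", "<id>", line)
        return re.sub(r"\d+", "<n>", line)
    return Counter(template(l) for l in log_lines)

log = [
    "timeout after 30s on conn 0xdeadbeef",
    "timeout after 45s on conn 0xcafef00d",
    "disk full on /var/log",
]
print(summarise(log).most_common())
```

Where an LLM earns its keep is the step after this: explaining what the unusual template means, which is exactly the kind of dense-to-digestible translation described above.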
[+] [-] AlotOfReading|7 months ago|reply
I'm not really sure what trust means in a world where everyone relies uncritically on LLM output. Even if the information from the LLM is usually accurate, can I rely on that in some particularly important instance?
[+] [-] energy123|7 months ago|reply
By the 7th generation it's hard to see how humans will still be value-add, unless it's for international law reasons to keep a human in the loop before executing the kill chain, or to reduce Skynet-like tail risks in line with Paul Christiano's arms race doom scenario.
Perhaps interfaces in every domain will evolve this way. The interface will shrink in complexity, until it's only humans describing what they want to the system, at higher and higher levels of abstraction. That doesn't necessarily have to be an English-language interface if precision in specification is required.
[+] [-] elendee|7 months ago|reply
I think we're slowly allowing AI access to the interface layer, but not to the information layer, and hopefully we'll figure out how to keep it that way.
[+] [-] elliotec|7 months ago|reply
I can be fully immersed in a game or anything and keep Claude in a corner of a tmux window next to a browser on the other monitor and jump in whenever I see it get to the next step or whatever.
[+] [-] Cthulhu_|7 months ago|reply
That said, the best GUI is the one you don't notice, so uh... I can't actually name anything else, it's probably deeply engrained in my computer usage.
[+] [-] kn81198|7 months ago|reply
[1]: https://www.youtube.com/watch?v=PUv66718DII
[+] [-] alwa|7 months ago|reply
But an operator learns to intuit which aspects to trust and which to double-check. The fact that it’s an “extra sense” can outweigh the fact that it’s not a perfect source of truth, no? Trust the tech where it proves useful to you, and find ways to compensate (or outright don’t use it) where it’s not.
[+] [-] thinkingemote|7 months ago|reply
Although we are talking HUDs, I'm not really talking about UI widgets having the good old skeuomorphism or better buttons. In the cockpit the pilot doesn't have his controls on a touch screen; he has an array of buttons and dials and switches all around him. It's these controls that are used in response to what the pilot sees on the HUD, and it's these controls that change the aircraft according to the pilot's will, which in turn changes what the HUD shows.
[+] [-] benjaminwootton|7 months ago|reply
It can detect situations intelligently, do the filtering and summarisation of what’s happening, and possibly make a recommendation.
This feels a lot more natural to me, especially in a business context when you want to monitor for 100 situations about thousands of customers.
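That detect → filter → summarise pipeline can be sketched as plain code, with the intelligent parts slotted in as replaceable steps. The rules and field names below are invented for illustration:

```python
from collections import Counter

def detect(events, rules):
    """Run each named predicate over the event stream; keep the matches."""
    return [(name, e) for e in events for name, pred in rules.items() if pred(e)]

def summarise(hits):
    """Roll matched events up per situation, per customer."""
    return Counter((name, e["customer"]) for name, e in hits)

rules = {
    "churn_risk": lambda e: e["logins_last_30d"] == 0,
    "overage": lambda e: e["usage"] > e["quota"],
}
events = [
    {"customer": "acme", "logins_last_30d": 0, "usage": 5, "quota": 10},
    {"customer": "acme", "logins_last_30d": 0, "usage": 12, "quota": 10},
    {"customer": "globex", "logins_last_30d": 4, "usage": 1, "quota": 10},
]
print(summarise(detect(events, rules)))
```

With an LLM in the loop, the hand-written predicates become natural-language situation descriptions and the summary becomes a readable briefing - but the monitoring shape stays the same.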
[+] [-] keyle|7 months ago|reply
Aren't auto-completes doing exactly this? It's not a co-pilot in the sense of a virtual human, but already more in the direction of a HUD.
Sure you can converse with LLMs but you can also clearly just send orders and they eagerly follow and auto-complete.
I think what the author might be trying to express, in a quirky fashion, is that AI should work alongside us, looking in the same direction as we are, not sitting opposite us at the table, staring at us and arguing. We'll have true AI when they do our bidding without any interaction from us.
[+] [-] gklitt|7 months ago|reply
Recent coding interfaces are all trending towards chat agents though.
It’s interesting to consider what a “tab autocomplete” UI for coding might look like at a higher level of abstraction, letting you mold code in a direct-feeling way without being bogged down in details.
[+] [-] ankit219|7 months ago|reply
I see the value in HUDs, but only when you can be sure the output is correct. If correctness is only 80% or so, copilots work better, so that a human in the loop can review and course-correct - the pair programmer/worker. This is not to say we need AI to reach higher levels of correctness inherently, just that deployed systems need to do so before they display information on a HUD.
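One way to operationalise that threshold is to route each finding by confidence, a sketch (the 0.95 cutoff and field names are arbitrary choices, not from any real system):

```python
def route(finding, hud_threshold=0.95):
    """Send high-confidence findings straight to the HUD; everything else
    goes to a review queue where a human stays in the loop."""
    return "hud" if finding["confidence"] >= hud_threshold else "review_queue"

print(route({"note": "unused variable", "confidence": 0.99}))   # → hud
print(route({"note": "possible race", "confidence": 0.80}))     # → review_queue
```

The interesting design question is who sets the threshold: a fixed product decision, or something the user tunes per task as they learn where the model is trustworthy.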
[+] [-] psychoslave|7 months ago|reply
Just because most people are fond of it doesn't actually mean it improves their life, goals and productivity.
[+] [-] Animats|7 months ago|reply
[1] https://marshallbrain.com/manna1
[2] https://www.six-15.com/vision-picking
[+] [-] Oras|7 months ago|reply
I think the challenge is primarily the context and intent.
The spellchecker knows my context easily, and there is a setting to choose from (American English, British English, etc.), as well as the paragraphs I'm writing. The intent is easy to recognise. While in a codebase, the context is longer and vaguer, the assistant would hardly know why I'm changing a function and how that impacts the rest of the codebase.
However, as the article mentions, it may not be a universal solution, but it's a perspective to consider when designing AI systems.
[+] [-] jpm_sd|7 months ago|reply
Compare another sci-fi depiction taken to the opposite extreme: Sirius Cybernetics products in the Hitchhikers Guide books. "Thank you for making a simple door very happy!"
[+] [-] henriquegodoy|7 months ago|reply
Orchestration platforms - Evolution of tools like n8n/Make into cybernetic process design systems where each node is an intelligent agent with its own optimization criteria. The key insight: treat processes as processes, rather than anthropomorphizing LLMs as humans. Build walls around probabilistic systems to ensure deterministic outcomes where needed. This solves massive "communication problems".
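The "walls around probabilistic systems" idea is essentially output validation between nodes: a probabilistic step may produce anything, but nothing malformed crosses the wall. A sketch, where the schema, retry policy, and node stub are all illustrative:

```python
def validate(output, schema):
    """Reject any node output that doesn't match the expected shape,
    so downstream nodes only ever see well-formed data."""
    if set(output) != set(schema):
        raise ValueError(f"unexpected keys: {set(output) ^ set(schema)}")
    for key, typ in schema.items():
        if not isinstance(output[key], typ):
            raise ValueError(f"{key}: expected {typ.__name__}")
    return output

def run_node(llm_call, schema, retries=2):
    """Wrap a probabilistic step: retry until its output passes the wall."""
    for _ in range(retries + 1):
        try:
            return validate(llm_call(), schema)
        except ValueError:
            continue
    raise RuntimeError("node never produced valid output")

schema = {"invoice_id": str, "amount": int}
print(run_node(lambda: {"invoice_id": "INV-7", "amount": 120}, schema))
```

The deterministic guarantee lives in `validate`, not in the model - which is the distinction between treating the pipeline as a process versus trusting the node as a colleague.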
Oracle systems - AI that holds entire organizations in working memory, understanding temporal context and extracting implicit knowledge from all communications. Not just storage but active synthesis. Imagine AI digesting every email/doc/meeting to build a living organizational consciousness that identifies patterns humans miss and generates strategic insights.
I explored this more on my personal blog: https://henriquegodoy.com/blog/stream-of-consciousness
[+] [-] roca|7 months ago|reply