madrox|1 month ago
It doesn't work...yet. I agree my stomach churns a little at this sentence. However, paying customers care about reliability and performance. Code review helps with that today, but it's only a matter of time before it becomes more performative than useful in serving those goals, at the cost of velocity.
AIorNot|1 month ago
the OP is a kid in his 20s describing the history of the last 3 years or so of small scale AI Development (https://www.linkedin.com/in/silen-naihin/details/experience/)
How does that compare to those of us with 15-50 years of software engineering experience working on giant codebases with years of domain rules, customers, use cases, etc.?
When will AI be ready? Microsoft tried to push AI into big enterprise, Anthropic is doing a better job - but it's all still in its infancy
Personally, I hope it won't be ready for another 10 years so I can retire before it takes over :)
I remember when folks on HN all called this AI stuff made up
madrox|1 month ago
I do think you're missing how this will likely go down in practice, though. Those giant codebases with years of domain rules are all legacy now. The question is how quickly a new AI codebase could catch up to and overtake them, with all the AI-compatibility best practices baked in. Once that happens, there is no value in that legacy code.
Any prognostication is a fool's errand, but I wouldn't go long on those giant codebases.
tacker2000|1 month ago
No mention of the results when targeting bigger, more complex projects that require maintainability, sound architectural decisions, etc… which is actually the bread and butter of SW engineering and where the big bucks get made.
bandrami|1 month ago
Ronsenshi|1 month ago
The project was started in the late 00s, so it has a substantial amount of business logic, rules and decisions. Maybe I'm being an old man shouting at clouds, but I assume (or hope?) it would fail to deliver whatever they promised to the CEO.
So, I guess I'll see the result of this shift soon enough - hopefully at a different company by the time AI-people are done.
onion2k|1 month ago
At most of the companies I've worked at, the development team is more like a cluster of individuals who all happen to be contributing to a shared codebase than anything resembling an actual team collaborating on a shared goal. AI-assisted engineering would have helped massively, because the AI would look beyond the myopic view of any developer focused only on their tiny domain within the bigger whole.
Admittedly though, on a genuinely good team it'll be less useful for a long time.
TYPE_FASTER|1 month ago
I have access to Claude Code at work. I integrated it with IntelliJ and let it rip on a legacy codebase that uses two different programming languages, plus one of the smaller SCADA platforms, plus hardware logic in a proprietary format used by a vendor tool. It was mostly right, probably 80-90%, with a couple of misunderstandings. No documentation, and I didn't really give it much help; it just kind of...figured it out.
It will be very helpful for refactoring the codebase in the direction we were planning on going, both from the design and maybe implementation perspectives. It's not going to replace anybody, because the product requires having a deep understanding across many disciplines and other external products, and we need technical people to work outside the team with the larger org.
My thinking changes every week. I think it's a mistake to blindly trust the output of the tool. I think it's a mistake to not at least try incorporating it ASAP, just to try it out and take advantage of the tools that everybody else will be adopting or has adopted.
I'm more curious about the impacts on the web: where is the content going to come from? We've seen the downward StackOverflow trend, will people still ask/answer questions there? If not, how will the LLMs learn? I think the adoption of LLMs will eventually drive the adoption of digital IDs. It will just take time.