(no title)
segphault|9 months ago
There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models are finally able to replace searches and Stack Overflow for a lot of my day-to-day programming.
jstummbillig|9 months ago
I find this sentiment increasingly worrisome. It's entirely clear that every last human will be beaten on code design in the upcoming years (I'm not going to argue over whether it's 1 or 5 years away; who cares?).
I wish people would just stop holding on to what amounts to nothing, and think and talk more about what can be done in a new world. We need good ideas, and I think this could be a place to advance them.
ssalazar|9 months ago
Can you point to _any_ evidence to support that human software development abilities will be eclipsed by LLMs other than trying to predict which part of the S-curve we're on?
DanHulton|9 months ago
Citation needed. In fact, I think this pretty clearly hits the "extraordinary claims require extraordinary evidence" bar.
sirstoke|9 months ago
Isn’t software engineering a lot more than just writing code? And I mean like, A LOT more?
Informing product roadmaps, balancing tradeoffs, understanding relationships between teams, prioritizing between separate tasks, pushing back on tech debt, responding to incidents, explaining that "it's a feature and not a bug", …
I’m not saying LLMs will never be able to do this (who knows?), but I’m pretty sure SWEs won’t be the only role affected (or even the most affected) if it comes to this point.
Where am I wrong?
acedTrex|9 months ago
In what world is this statement remotely true?
mattgreenrocks|9 months ago
* "it's too hard!"
* "my coworkers will just ruin it"
* "startups need to pursue PMF, not architecture"
* "good design doesn't get you promoted"
And now we have "AI will do it better soon."
None of those are entirely wrong. They're not entirely correct, either.
davidsainez|9 months ago
I'm not convinced that they can reason effectively (see the ARC-AGI-2 benchmarks). Doesn't mean that they are not useful, but they have their limitations. I suspect we still need to discover tech distinct from LLMs to get closer to what a human brain does.
jjice|9 months ago
FWIW, I think you're probably right that we need to adapt, but there was no explanation as to _why_ you believe that's the case.
concats|9 months ago
However, in real-life work situations, that 'perfect information' prerequisite will be a big hurdle, I think. Design can depend on any number of vague agreements and lots of domain-specific knowledge, things a senior software architect has only learnt because they've been at the company for a long time. It will be very hard for an LLM to make all the correct decisions without that knowledge.
Sure, if you write down a summary of each and every meeting you've attended for the past 12 months, and attach your entire company Confluence to the prompt, perhaps then the LLM can design the right architecture. But is that realistic?
More likely I think the human will do the initial design and specification documents, with the aforementioned things in mind, and then the LLM can do the rest of the coding.
Not because it would have been technically impossible for the LLM to do the code design, but because it would have been practically impossible to craft the correct prompt that would have given the desired result from a blank sheet.
liefde|9 months ago
The fear that machines will surpass us in design, architecture, or even intuition is not just technical. It is existential. It touches our identity, our worth, our place in the unfolding story of intelligence.
But what if the invitation is not to compete, but to co-create? To stop asking what we are better at, and start asking what we are becoming.
The grief of letting go of old roles is real. So is the joy of discovering new ones. The future is not a threat. It is a mirror.
epolanski|9 months ago
Which person is it? Because 90% of the people in our trade are bad, like, real bad.
I get that people on HN are in that elitist niche of those who care more, focus on their careers more, etc., so they don't even realize the existence of armies of low-quality body-rental consultancies and small shops out there working on Magento or Liferay or even worse crap.
bayindirh|9 months ago
No code & AI assisted programming has been told to be around the corner since 2000. We just arrived to a point where models remix what others have typed on their keyboards, and yet somebody still argues that humans will be left in the dust in near times.
No machine, incl. humans can create something more complex than itself. This is the rule of abstraction. As you go higher level, you lose expressiveness. Yes, you express more with less, yet you can express less in total. You're reducing the set's symbol size (element count) as you go higher by clumping symbols together and assigning more complex meanings to it.
Yet, being able to describe a larger set with more elements while keeping all elements addressable with less possible symbols doesn't sound plausible to me.
So, as others said. Citation needed. Extraordinary claims needs extraordinary evidence. No, asking AI to create a premium mobile photo app and getting Halide's design as an output doesn't count. It's training data leakage.
bdangubic|9 months ago
Our entire industry (after all these years) does not have even a remotely sane measure or definition of what good code design is. Hence, this statement is dead on arrival: you are claiming something that can be neither proven nor disproven by anyone.
irjustin|9 months ago
I wouldn't worry about it because, as you say, we'll be "in a new world". The old will simply "die".
We're in the midst of a paradigm shift, and it's here to stay. The key is the speed at which it hit and how much it changed. GPT-3 changed the game overnight, and huge chunks of people are mentally struggling to keep up - in particular in education.
But people who resist AI will become the laggards.
Workaccount2|9 months ago
I think there is a total seismic change in software about to go down, similar to going from gas lamps to electric. Software doesn't need to be the way it is now anymore, since we have just about solved human-language-to-computer-interface translation. I don't want to fuss with formatting a Word document anymore; I would rather just tell an LLM and let it modify the program memory to implement what I want.
pmarreck|9 months ago
LOLLLLL. You see a good one-shot demo and imagine an upward line; I work with LLM assistance every day and see... an asymptote (which is only budged by exponential power expenditure). As they say in sailing, you'll never win the race by following the guy in front of you... which is exactly what every single LLM does: sophisticated modeling of prior behavior. Innovation is not their strong suit, LOL.
Perfect example: I cannot for the life of me get any LLM to stick with TDD, building one feature at a time, which I know produces superior code (both as a human and as an LLM!). Prompting will get them to do it for one or two cycles, and then they start regressing to the crap mean. Because that's what they were trained on. And it's the rare dev who can stick with TDD, for whatever reason, so that's exactly what the LLM does. Which is absolutely subpar.
I'm not even joking, every single coding LLM would improve immeasurably if the model were refined to just 1) write a SINGLE test expectation, 2) watch it fail (to prove the test is valid), 3) build the feature, 4) work on it until the test passes, 5) repeat until the app requirements are done. Anything already built that was broken by the new work would be flagged by the unit test suite immediately and could be fixed before the problem gets too complex.
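Something like this harness, sketched in Python just to make it concrete (the llm callable, the prompts, and the file layout are all hypothetical; only the pytest exit-code convention is real):

    import subprocess

    def write_file(path, content):
        with open(path, "w") as f:
            f.write(content)

    def run_tests():
        # pytest exits 0 only when every test in the suite passes
        return subprocess.run(["pytest", "-q"]).returncode

    def tdd_loop(llm, requirements):
        # llm: any callable that takes a prompt and returns code
        for req in requirements:
            # 1) a SINGLE test expectation, nothing else
            write_file("test_current.py", llm(f"Write exactly one failing test for: {req}"))
            # 2) watch it fail, proving the test is valid
            assert run_tests() != 0, "test passed before the feature exists; reject it"
            # 3 + 4) build the feature and iterate until the whole suite is green;
            # the full suite also catches regressions in earlier features
            while run_tests() != 0:
                write_file("feature.py", llm(f"Make the failing test pass for: {req}"))
            # 5) next requirement

The point isn't this exact harness; it's that steps 2 and 4 get enforced by exit codes instead of by hoping the model stays disciplined.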
LLMs also often "lose the plot", and that's not even a context-limit problem; they just aren't conscious and don't have wills, so their work eventually drifts off course or gets stuck in these weird flip-flop states.
But sure, with an infinite amount of compute and an infinite amount of training data, anything is possible.
dan_lannan|9 months ago
My worst experiences with LLM coding come from my own mistakes in giving it the wrong intent: inconsistent test cases, laziness in explaining or even knowing what I actually want.
Architecture and abstraction happen in someone's mind in order to communicate intent. If intent is the bottleneck, it will still come down to a human imagining the abstraction in their head.
I'd be willing to bet abstraction and architecture become the only things left for humans to do.
pjmlp|9 months ago
A few humans will stay around to keep the robots going, a lesser few will be the elite allowed to create the robots, and everyone else will have to look for jobs elsewhere, where robots and automated systems are increasingly shrinking opportunities.
I am certainly glad to be closer to retirement than early career.
uludag|9 months ago
I don't know that this sentiment should be considered worrisome. The situation itself seems more worrisome. If people do end up being beaten on code design next year, there's not much that could be done anyway. If LLMs reach such capability, the automation tools will be developed and, if effective, they'll be deployed en masse.
If the situation you've described comes to pass, pondering the miraculousness of the new world brought by AI would be a pretty fruitless endeavor for the average developer (besides startup founders, perhaps). It would be much better to focus on achieving job security and accumulating savings for any layoff.
Quite frankly, I have a feeling that deglobalisation, disrupted supply chains, climate change, aging demographics, global conflict, mass migration, etc. will leave a much larger mark on this new world than any advance in AI will.
solumunus|9 months ago
The timeline could easily be 50 or 100 years. No emerging technology is resistant to diminishing returns, and it seems highly likely that novel breakthroughs, rather than continued LLM improvement, are required to reach that next step of reasoning.
StefanBatory|9 months ago
Can't really prepare for that unless you switch to a different career... ideally one with manual labor, as automation might still be too expensive :P
floydnoel|9 months ago
Instead of ignoring the duplicates when I query different models, I use the duplicates as a signal that something might be more accurate. I wonder what your results might have looked like if you had kept only the duplicated permissions and gone from there.
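A minimal sketch of that voting idea in Python (the model names and permissions here are made up):

    from collections import Counter

    def consensus(answers_by_model, min_votes=2):
        # keep only items that at least min_votes independent models agree on
        votes = Counter()
        for items in answers_by_model.values():
            votes.update(set(items))  # one vote per model per item
        return {item for item, n in votes.items() if n >= min_votes}

    suggested = {
        "model_a": {"s3:GetObject", "s3:ListBucket", "iam:PassRole"},
        "model_b": {"s3:GetObject", "s3:ListBucket"},
        "model_c": {"s3:GetObject", "kms:Decrypt"},
    }
    print(consensus(suggested))  # {'s3:GetObject', 's3:ListBucket'}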
mark_l_watson|9 months ago
I just dropped version 0.1 of my Gemini book, and I have an example for making a Gem (really simple to do); read online link:
https://leanpub.com/solo-ai/read
siscia|9 months ago
The unfortunate state of open-source funding makes building such a simple tool a losing venture, unfortunately.
yousif_123123|9 months ago
The models are very impressive. But issues like these still make me feel they are doing more pattern matching (although there's also some magic, don't get me wrong) than fully reasoning over everything correctly, like you'd expect of a typical human reasoner.
disgruntledphd2|9 months ago
And that's fine and useful.
Volundr|9 months ago
Are we sure they know these things as opposed to being able to consistently guess correctly? With LLMs I'm not sure we even have a clear definition of what it means for it to "know" something.
rdtsc|9 months ago
They are the perfect "fake it till you make it" example cranked up to 11. They'll bullshit you, but will do it confidently and with proper grammar.
> Many attempts at making them refuse to answer what they don't know caused them to refuse to answer things they did in fact know.
I can see in some contexts that being desirable if it can be a parameter that can be tweaked. I guess it's not that easy, or we'd already have it.
mbesto|9 months ago
- Determining what features to make for users
- Forecasting out a roadmap that are aligned to business goals
- Translating and prioritizing all of these to a developer (regardless of whether these developers are agentic or human)
Coincidentally, these are the areas that are frequently the largest contributors to a software business's success... not whether you use Next.js with a Go and Elixir backend against a multi-geo redundant, multi-sharded CockroachDB database, or whether your code is clean/elegant.
dist-epoch|9 months ago
At half of the companies you can randomly pick those three things and probably improve the situation. Using an AI would be a massive improvement.
ChocolateGod|9 months ago
To my surprise, Gemini got it spot on first time.
Tainnor|9 months ago
I asked it a complicated question about the Scala ZIO framework that involved subtyping, type inference, etc. - something that would definitely be hard to figure out just from reading the docs. The first answer it gave me was very detailed, very convincing and very wrong. Thankfully I noticed it myself and was able to re-prompt it and I got an answer that is probably right. So it was useful in the end, but only because I realised that the first answer was nonsense.
alex1138|9 months ago
There has to be some kind of recursive error checking thing, or something
tastysandwich|9 months ago
Regexes are another area where I can't get much help from LLMs. If it's something common like a phone number, that's fine. But it seems to have trouble with anything novel. It will spit out junk very confidently.
tough|9 months ago
Internet access also helps.
Also, having markdown files with the stack, etc., and any rules.
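For example, a rules file dropped into the repo root might look like this (contents invented, just to illustrate the idea):

    # rules.md
    Stack: Python 3.12, FastAPI, Postgres, pytest.
    - No new dependencies without asking first.
    - Write a failing test before each feature; keep the suite green.
    - Follow the existing module layout; don't add new top-level directories.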
viraptor|9 months ago
What do you mean specifically? I found the "let's write a spec, let's make a plan, implement this step by step with testing" results in basically the same approach to design/architecture that I would take.
pzo|9 months ago
https://github.com/upstash/context7
onlyrealcuzzo|9 months ago
One area where I've still noticed weakness is when you want to use a popular library from one language in another language: it has a tendency to assume the function signatures from the popular language carry over to the other one.
Naively, this seems like a hard problem to solve.
E.g., ask it how to use torchlib in Ruby instead of Python.
froh|9 months ago
So it's a great tool in the hands of a creative architect, but it is not one in and of itself, and I don't yet see how it could be.
My pet theory is that the human brain can't understand and formalize its own creativity, because you need a higher-order logic to fully capture some other logic. I've been told that Gödel's second incompleteness theorem "can't be applied like this to the brain", but I stubbornly insist: the brain implements _some_ formal system, and it can't understand how that system works. Tongue in cheek, somewhat, maybe.
But back to earth: I agree LLMs are a great tool for a creative human mind.
dist-epoch|9 months ago
> If you think his theorem limits human knowledge, think again
https://www.youtube.com/watch?v=OH-ybecvuEo
breuleux|9 months ago
I would argue that the second incompleteness theorem doesn't have much relevance to the human brain, because it is trying to prove a falsehood. The brain is blatantly not a consistent system. It is, however, paraconsistent: we are perfectly capable of managing a set of inconsistent premises and extracting useful insight from them. That's a good thing.
It's also true that we don't understand how our own brain works, of course.
Foreignborn|9 months ago
Usually I’m using a minimum of 200k tokens to start with gemini 2.5.
mattlondon|9 months ago
But I wonder when we'll be happy. Do we expect colleagues, friends, and family to be 100% laser-accurate 100% of the time? I'd wager we don't. Should we expect that from an artificial intelligence, too?
ookblah|9 months ago
At least for 90% of the CRUD apps out there, you can definitely abstract away the entire base framework of getting, listing, and updating records. I guess the problem is validating that data for use in other, more complex workflows.
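A sketch of what that base layer amounts to, in Python (in-memory only, assuming a real app would back it with a database; validation is exactly the part this leaves out):

    from typing import Dict, Generic, List, Optional, TypeVar

    T = TypeVar("T")

    class CrudStore(Generic[T]):
        # the generic get/list/update plumbing that every CRUD app repeats
        def __init__(self):
            self._rows: Dict[int, T] = {}
            self._next_id = 1

        def create(self, row: T) -> int:
            row_id = self._next_id
            self._next_id += 1
            self._rows[row_id] = row
            return row_id

        def get(self, row_id: int) -> Optional[T]:
            return self._rows.get(row_id)

        def list(self) -> List[T]:
            return list(self._rows.values())

        def update(self, row_id: int, row: T) -> bool:
            if row_id not in self._rows:
                return False
            self._rows[row_id] = row
            return True

    users = CrudStore()                  # CrudStore[UserRow]() with a real row type
    uid = users.create({"name": "ada"})
    users.update(uid, {"name": "ada lovelace"})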
bruce511|9 months ago
Of course that last 10% does a lot of heavy lifting. Domain expertise, program and database design, sales, support, actually processing the data for more than just simple reports, and so on.
And sure, the code is not maximally efficient in all cases, but it is consistent, and deterministic. Which is all I need from my code generator.
I see a lot of panic from programmers (outside our space) who worry about their futures. As if programming is the ultimate career goal. When really, writing code is the least interesting, and least valuable part of developing software.
Maybe LLMs will code software for you. Maybe they already do. And, yes, despite their mistakes it's very impressive. And yes, it will get better.
But they are miles away from replacing developers - unless your skillset is limited to "coding", there's no need to worry.
johnisgood|9 months ago
Tell me about it. Thankfully I have not experienced it as much with Claude as I did with GPT. It can get quite annoying. GPT kept telling me to use this and that and none of them were real projects.
impulser_|9 months ago
LLMs just guess, so you have to give them a cheatsheet to help them guess closer to what you want.
abletonlive|9 months ago
I find that for 90% of the things I'm doing, an LLM removes 90% of the starting friction and lets me get to the part I'm actually interested in. Of course, I also develop professionally in a Python stack, and LLMs are one-shotting a ton of stuff. My work is standard data pipelines and web apps.
I'm a tech lead at a FAANG-adjacent company with 11 YOE, and the systems I work with are responsible for about half a billion dollars a year in transactions, directly, and growing. You could argue maybe my standards are lower than yours, but I think if I were making deadly mistakes, the company would have been on my ass by now or my peers would have caught them.
Everybody I work with is getting valuable output from LLMs. We are using all the latest OpenAI models and have a business relationship with OpenAI. I don't think I'm even that good at prompting and mostly rely on "vibes". Half of the time I'm pointing the model at an example and telling it "in the style of X, do X for me".
I feel like comments like these almost seem gaslight-y, or maybe there's just a major expectation mismatch between people. Are you expecting LLMs to just do exactly what you say while your entire job is to sit back and prompt the LLM? Maybe I'm just used to shit code, but I've looked at many code bases, and there is huge variance in quality; the average is pretty poor. The average code that AI pumps out is much better.
oparin10|9 months ago
I use them mostly for Golang and Rust; I work on building cloud infrastructure automation tools.
I'll try to give some examples, they may seem overly specific but it's the first things that popped into my head when thinking about the subject.
Personally, I found that LLMs consistently struggle with dependency injection patterns. They'll generate tightly coupled services that directly instantiate dependencies rather than accepting interfaces, making testing nearly impossible.
If I ask them to generate code and also their respective unit tests, they'll often just create a bunch of mocks or start importing mock libraries to compensate for their faulty implementation, rather than fixing the underlying architectural issues.
They consistently fail to understand architecture patterns, generating code where infrastructure concerns bleed into domain logic. When corrected, they'll make surface level changes while missing the fundamental design principle of accepting interfaces rather than concrete implementations, even when explicitly instructed that it should move things like side-effects to the application edges.
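The pattern in miniature, sketched in Python for brevity rather than Go (all names here are invented):

    from typing import Protocol

    class Mailer(Protocol):
        def send(self, to: str, body: str) -> None: ...

    class SmtpMailer:
        def send(self, to: str, body: str) -> None:
            print(f"smtp -> {to}: {body}")  # stand-in for a real SMTP call

    # what LLMs tend to produce: the dependency is constructed inside,
    # so a test can only intercept it with mock/patch machinery
    class CoupledSignupService:
        def __init__(self):
            self.mailer = SmtpMailer()  # hard-wired concrete class

    # what was asked for: accept the interface, so a test can hand in a fake
    class SignupService:
        def __init__(self, mailer: Mailer):
            self.mailer = mailer

        def register(self, email: str) -> None:
            self.mailer.send(email, "welcome!")

    class FakeMailer:
        def __init__(self):
            self.sent = []

        def send(self, to: str, body: str) -> None:
            self.sent.append((to, body))

    service = SignupService(FakeMailer())  # no mocking library needed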
Despite tailoring prompts for different models based on guides and personal experience, I often spend 10+ minutes correcting the LLM's output when I could have written the functionality myself in half the time.
No, I'm not expecting LLMs to replace my job. I'm expecting them to produce code that follows fundamental design principles without requiring extensive rewriting. There's a vast middle ground between "LLMs do nothing well" and the productivity revolution being claimed.
That being said, I'm glad it's working out so well for you, I really wish I had the same experience.
thewebguyd|9 months ago
Python is generally fine, as you've experienced, as is JavaScript/TypeScript & React.
I've had mixed results with C# and PowerShell. With PowerShell, hallucinations are still a big problem. Not sure if it's the Verb-Noun naming scheme of cmdlets, but most models still make up cmdlets that don't exist on the fly (they will correct themselves once you point out that a cmdlet doesn't exist, but at that point, why bother, when I can just do it myself correctly the first time?).
With C#, even with my existing code as context, it can't adhere to a consistent style and can't handle nullable reference types (albeit a relatively new feature in C#). It works, but I have to spend too much time correcting it.
Given my own experiences and the stacks I work with, I still won't trust an LLM in agent mode. I make heavy use of them as a better Google, especially since Google has gone to shit, and to bounce ideas off of, but I'll still write the code myself. I don't like reviewing code, and having LLMs write code for me just turns me into a full time code reviewer, not something I'm terribly interested in becoming.
I still get a lot of value out of the tools, but for me I'm still hesitant to unleash them on my code directly. I'll stick with the chat interface for now.
Edit: Golang is another language I've had problems relying on LLMs for. On the flip side, LLMs have been great for me with SQL, and I'm grateful for that.
codexon|9 months ago
Just now, I've been feeding o4-mini (with high reasoning effort) a C++ file with a deadlock in it.
It has failed to fix the problem three times, and it introduced a double-free bug in one of the attempts. It did not see the double-free problem until I pointed it out.
bboygravity|9 months ago
Either you have no idea how terrible real-world commercial software (architecture) is, or you're vastly underestimating newer LLMs, or both.
pdntspa|9 months ago
While far from perfect for large projects, controlling the scope of individual requests (with orchestrator/boomerang mode, for example) seems to do wonders.
Given the sheer, uh, variety of code I see day to day in an enterprise setting, maybe the problem isn't with Gemini?
gxs|9 months ago
Never seen it fumble that.
I swear, people act like humans themselves never need to be asked for clarification.
SafeDusk|9 months ago
This minimal template might be helpful to you: https://github.com/aperoc/toolkami